Categories
Cloud Hosting

Rackspace Incident Highlights How Disruptive Attacks on Cloud Providers Can Be – DARKReading

A Dec. 2 ransomware attack at Rackspace Technology, which the managed cloud hosting company took several days to confirm, is quickly becoming a case study in the havoc that can result from a single well-placed attack on a cloud service provider.

The attack has disrupted email services for thousands of mostly small and midsize organizations. The forced migration to a competitor's platform left some Rackspace customers frustrated and desperate for support from the company. It has also already prompted at least one class-action lawsuit and pushed the publicly traded Rackspace's share price down nearly 21% over the past five days.

"While it's possible the root cause was a missed patch or misconfiguration, there's not enough information publicly available to say what technique the attackers used to breach the Rackspace environment," says Mike Parkin, senior technical engineer at Vulcan Cyber. "The larger issue is that the breach affected multiple Rackspace customers here, which points out one of the potential challenges with relying on cloud infrastructure." The attack shows that if threat actors can compromise or cripple a large service provider, they can affect many tenants at once.

Rackspace first disclosed something was amiss at 2:20 a.m. EST on Dec. 2 with an announcement it was looking into "an issue" affecting the company's Hosted Exchange environment. Over the next several hours, the company kept providing updates about customers reporting email connectivity and login issues, but it wasn't until nearly a full day later that Rackspace even identified the issue as a "security incident."

By that time, Rackspace had already shut down its Hosted Exchange environment citing "significant failure" and said it did not have an estimate for when the company would be able to restore the service. Rackspace warned customers that restoration efforts could take several days and advised those looking for immediate access to email services to use Microsoft 365 instead. "At no cost to you, we will be providing access to Microsoft Exchange Plan 1 licenses on Microsoft 365 until further notice," Rackspace said in a Dec. 3 update.

The company noted that Rackspace's support team would be available to assist administrators in configuring and setting up accounts for their organizations in Microsoft 365. In subsequent updates, Rackspace said it had helped, and was continuing to help, thousands of its customers move to Microsoft 365.

On Dec. 6, more than four days after its first alert, Rackspace identified the issue that had knocked its Hosted Exchange environment offline as a ransomware attack. The company described the incident as isolated to its Exchange service and said it was still trying to determine what data the attack might have affected. "At this time, we are unable to provide a timeline for restoration of the Hosted Exchange environment," Rackspace said. "We are working to provide customers with archives of inboxes where available, to eventually import over to Microsoft 365."

The company acknowledged that moving to Microsoft 365 is not going to be particularly easy for some of its customers and said it has mustered all the support it can get to help organizations. "We recognize that setting up and configuring Microsoft 365 can be challenging and we have added all available resources to help support customers," it said. Rackspace suggested that as a temporary solution, customers could enable a forwarding option, so mail destined for their Hosted Exchange account goes to an external email address instead.

Rackspace has not disclosed how many organizations the attack has affected, whether it received any ransom demand or paid a ransom, or whether it has been able to identify the attacker. The company did not respond immediately to a Dark Reading request seeking information on these issues. In a Dec. 6 SEC filing, Rackspace warned the incident could cause a loss in revenue for the company's nearly $30 million Hosted Exchange business. "In addition, the Company may have incremental costs associated with its response to the incident."

Messages on Twitter suggest that many customers are furious at Rackspace over the incident and the company's handling of it so far. Many appear frustrated at what they perceive as Rackspace's lack of transparency and the challenges they are encountering in trying to get their email back online.

One Twitter user and apparent Rackspace customer wanted to know about their organization's data. "Guys, when are you going to give us access to our data," the user posted. "Telling us to go to M365 with a new blank slate is not acceptable. Help your partners. Give us our data back."

Another Twitter user suggested that the Rackspace attackers had also compromised customer data in the incident based on the number of Rackspace-specific phishing emails they had been receiving the last few days. "I assume all of your customer data has also been breached and is now for sale on the dark web. Your customers aren't stupid," the user said.

Several others expressed frustration over their inability to get support from Rackspace, and others claimed to have terminated their relationship with the company. "You are holding us hostages. The lawsuit is going to take you to bankruptcy," another apparent Rackspace customer noted.

Davis McCarthy, principal security researcher at Valtix, says the breach is a reminder why organizations should pay attention to the fact that security in the cloud is a shared responsibility. "If a service provider fails to deliver that security, an organization is unknowingly exposed to threats they cannot mitigate themselves," he says. "Having a risk management plan that determines the impact of those known unknowns will help organizations recover during that worst case scenario."

Meanwhile, the lawsuit, filed by California law firm Cole & Van Note on behalf of Rackspace customers, accused the company of "negligence and related violations" around the breach. "That Rackspace offered opaque updates for days, then admitted to a ransomware event without further customer assistance is outrageous," a statement announcing the lawsuit noted.

No details are publicly available on how the attackers might have breached Rackspace's Hosted Exchange environment. But security researcher Kevin Beaumont has said his analysis showed that just prior to the intrusion, Rackspace's Exchange cluster was running versions of the technology that appeared vulnerable to the "ProxyNotShell" zero-day flaws disclosed in Exchange Server earlier this year.

"It is possible the Rackspace breach happened due to other issues," Beaumont said. But the breach is a general reminder why Exchange Server administrators need to apply Microsoft's patches for the flaws, he added. "I expect continued attacks on organizations via Microsoft Exchange through 2023."


LastPass cloud breach involves ‘certain elements’ of customer information – SC Media

LastPass on Wednesday reported that it detected unusual activity within a third-party cloud service that's shared by LastPass and its GoTo affiliate, an event that was the company's second reported breach in three months.

In an update blog to customers, LastPass CEO Karim Toubba said the unauthorized party, using information obtained in the earlier August 2022 incident, gained access to "certain elements" of customer information.

Toubba said LastPass launched an investigation, hired Mandiant, and alerted law enforcement.

"We are working diligently to understand the scope of the incident and identify what specific information has been accessed," wrote Toubba. "In the meantime, we can confirm that LastPass products and services remain fully functional."

"It's concerning to hear that LastPass experienced another security incident following a previous one that was made public back in August," said Chris Vaughan, vice president of technical account management, EMEA, at Tanium. Vaughan said the earlier attack involved source code and technical information being taken through unauthorized access to a third-party storage service the company uses.

"The new breach is more severe because customer information has been accessed, which wasn't the case previously," Vaughan said. "The intruder has done this by leveraging data exposed in the previous incident to gain access to the LastPass IT environment. The company says that passwords remain safely encrypted and that it is working to better understand the scope of the incident and identify exactly what data has been taken. You can bet that the IT security team is working around the clock on this, and their visibility of the network and the devices being connected to it will be severely tested."

Vaughan added that password managers are a challenging but attractive target for a threat actor, as they can potentially unlock a treasure trove of access to accounts and sensitive customer data in an instant if they are breached.

"However, the benefits of using a secure password management solution often far outweigh the risks of a potential breach," said Vaughan. "When layered with the other security recommendations, it's still one of the best solutions to prevent credential theft and associated attacks. We just have to hope that customer confidence has not been impacted too much by these recent attacks."

Lorri Janssen-Anessi, director of external cyber assessments at BlueVoyant, added that there's a notion of security with cloud hosting, and while that's somewhat true, organizations must still stay aware of the attack surface that exists on cloud-hosted networks, services, or applications.

"Companies must still minimize user privileges, patch vulnerable software, be conscious of what assets are actively hosted, and make sure to have secure configurations, including cloud security settings," said Janssen-Anessi.

"Be thoughtful about what you choose to host in the cloud, and don't put critical data or operationally necessary applications that could affect your business continuity in the cloud, as you are at the mercy of the hosting provider and their continuity of services," said Janssen-Anessi. "Like any third-party connection, cloud hosting also needs to be thoughtfully included and secured within your ecosystem."


Focus on cost and agility to ensure your cloud migration success – CIO

When businesses migrate to public cloud, they expect to enjoy greater agility, resiliency, scalability, security, and cost-efficiency. But while some organizations undergo a relatively smooth journey, others can find themselves on a bumpy trek fraught with time-wasting detours and lurking money pits, with that glowing cloud promise still beyond their reach.

Where do they go awry? Too often, impetuosity and a diminished focus on key business drivers can result in a loss of direction, reports Chris DePerro, SVP, Global Professional Services at NTT.

"When assembling the case for a move to public cloud, organizations tend to overload stakeholder expectations and lose sight of the main imperatives behind the initiative, namely supporting business agility and cost-efficiency," DePerro says. "When a cloud strategy team has those chief objectives nailed down, they can plan supporting considerations such as security, resiliency and scalability around them more effectively."

The "Multicloud Business Impact Brief" by 451 Research summarizes the findings of its own "Voice of the Enterprise: Cloud, Hosting & Managed Services, Budgets & Outlook 2022" survey and identifies costs as a key driver and desired outcome of cloud transformation, as well as a key limiting factor in the use of some of these resources. Indeed, 39% of those surveyed cited concerns about controlling costs.

But cost-efficiencies from public-cloud adoption can be undermined if organizations overspend to get there. Public-cloud services can be a tremendous resource if proper care is taken to plan and optimize the environment rather than just pushing the entire estate to the cloud as is. If optimization isn't done, clients are often left with a larger bill and fall short of their cloud aspirations.

In many instances, the problems can be traced to inexperience in and insufficient understanding of cloud-migration best practices, and a lack of proper planning. Too often, organizations set off with project plans that do not take account of the full gamut of challenges.

Why do organizations fast-forward cloud migration, even if it might result in headaches afterwards?

"Common missteps are not taking time to understand and remediate as many issues as possible within the existing IT estate before migration occurs," says DePerro. "It's crucial that the best migration approaches are selected based on solid discovery for each workload. This determines the approach best suited to an organization's specific applications."

DePerro adds: "Without a thorough pre-migration assessment of its IT estate, an organization might shift its existing inefficiencies into the cloud, where they become even more of a budgetary and performance burden by bumping up cloud operational costs."

The capacity to innovate and respond rapidly to changing market conditions is more vital than ever. Cost-efficiency and expectations of agility should be integral to a properly orchestrated cloud-migration program.

"Agility is not always well understood," explains DePerro. "Increasingly, it's about using the cloud to give organizations the facility and flexibility to achieve their business objectives faster, rather than necessarily having numerous added features from the outset. We are seeing more and more customers trying to modernize incrementally so as not to get bogged down in overly complicated transformation. Often, expediency overrides functionality when it comes to getting apps to market fast so that value can be derived ASAP."

This requirement plays into the increased adoption of multicloud models. The business benefits of multicloud are compelling: organizations want to develop and run their applications in the cloud environment that's best suited to their needs: private, public, edge or hybrid.

This in turn enlarges the complexities of managing workloads across multiple platforms.

"Working with a managed cloud service provider is a proven way to mitigate those complexities," DePerro says, "especially if its cloud reach extends across both multivarious cloud platforms and business industries, as NTT's does. This enables us to share both technical knowledge and cross-sector insight."

Increasing multicloud take-up demonstrates again how rapidly cloud opportunities are evolving, leaving migration roadmaps outdated.

"Ultimately, many organizations will have to go multicloud because it's the only way they will achieve their business objectives," DePerro believes. For many, multicloud is the new reality. And as with any journey into uncharted territory, being accompanied by a knowledgeable guide such as NTT, which has helped many organizations complete their cloud journeys, will help navigate the twists and turns ahead.

Visit NTT's website now to find out how to start your cloud journey with experts who understand the pitfalls and how to overcome them.


Last Year’s Predictions and 4 Kubernetes and Edge Trends to Watch – Spiceworks News and Insights

The edge computing landscape is evolving fast. How can enterprises best prepare to ride the upcoming trends? In this article, Stewart McGrath, CEO and co-founder of Section, reviews the predictions from last year about Kubernetes and the edge and examines four key trends to look forward to.

As this year draws to a close, I thought it would be a good time to throw out a few predictions about what 2023 holds for the Kubernetes, container orchestration and edge computing landscape. But first, I'd like to hold ourselves accountable and look back on the predictions we made this time last year. In retrospect, how did we score?

1. The use of containers at the edge will continue to grow

The Internet of Things, online gaming, video conferencing and a whole host of emerging use cases mean the use of containers at the edge will continue to grow. Moreover, as usage increases, so too will organizational expectations. Companies will demand more from edge platform providers in terms of support to help ease deployment and ongoing operations.

This one is tough to measure as there's little data available. This outcome seems inevitable, and anecdotal evidence from conversations with analysts, customers and others in the industry indicates it is, in fact, happening. That said, without hard evidence, I have to give us an N/A on the score check here.

2. Kubernetes will become central to edge computing

Hosting and edge platforms built to support Kubernetes will have a competitive advantage in flexibly supporting modern DevOps teams' requirements. Edge platform providers who can ease integration with Kubernetes-aware environments will attract attention from the growing cloud-native community; for example, leveraging Helm charts to allow application builders to hand over their application manifest and rely on an intelligent edge orchestration system to deploy clusters accordingly.

How about 7.5 out of 10 on this one? The overall ecosystem developing around Cloud Native Computing Foundation (CNCF) technologies is growing quickly and extensively. CNCF projects like KubeVirt, Knative, WASM, Krustlet, Dapr and others indicate the growing acceptance of Kubernetes as an operating system of choice for not only containers but also virtual machines and serverless workloads. Providers of limited distributions of Kubernetes clusters, such as VMware's Tanzu, Rafay Systems and Platform9, continue to build and help customers run on multi-location, always-on footprints, while our location-aware global Kubernetes platform as a service grew substantially in its ability to help customers instantly run Kubernetes workloads in the right place at the right time.

3. CDN attempts to reinvent themselves will gain pace

In the year ahead, content delivery networks (CDNs) will increasingly recognize the need to diversify away from the steadily declining margins of large object (e.g., video and download) delivery. In addition to reinventing themselves as application security platforms, CDNs will continue to lean into the application hosting market. Cloudflare and Fastly have built on their existing infrastructure to deliver distributed serverless. We expect other CDNs will enter and/or expand offerings focused on the application hosting market as they seek to capitalize on their investment in building distributed networks.

I am going to take a 10 out of 10 here. Akamai indicated a major shift when it spent nearly $1 billion acquiring Linode to plunge headlong into the application hosting space and recently announced its investment in data network company Macrometa. Fastly and Cloudflare have continued to expand their Edge offerings and, at recent conferences, reinforced the importance of their Edge compute plays for the future of their companies.

4. Telcos will rise

Telcos will start developing more mature approaches to application hosting and leverage their unique differentiation of massively distributed networks to deliver hosting options at the edge. Additionally, more partnerships will emerge to facilitate the connection between developers and telcos' 5G and edge infrastructure to solve their lack of expertise in this space.

We were too optimistic, so I'll give this one a 5 out of 10. The telcos do seem to be moving in this direction but are moving at a typical telco pace. While players like Lumen have continued to roll out hosting infrastructure in distributed footprints, we did not see a monumental shift released by any telco during 2022.


Overall, I'd give ourselves 22.5 out of 30, or 75% (having removed the N/A score). Definitely a passing mark, but some headroom for excellence this year!


Kubernetes environments allow for the dynamic scheduling of non-related workloads in a single cluster. With the development of greater levels of Kubernetes abstraction and the hardening of security and observability, I can see a world where providers of Kubernetes clusters will announce the availability of their clusters to a general global pool of available resources on which a developer could deploy workloads.

Each cluster will be able to describe its attributes (location, capacity, compliance, etc.), and devs will be able to let an overall orchestration system match workload requirements to underlying attributes of contributed clusters (e.g., needs GPU, PCI DSS, specific always on locations, etc.). This will be the next evolution of cloud computing: a dynamic cloud of clusters.
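The matching step described above can be sketched in a few lines. This is a toy illustration of the idea, not any real orchestrator's API; the cluster names and attribute tags are invented:

```python
# Toy model of a "dynamic cloud of clusters": each cluster advertises a set of
# attribute tags (location, hardware, compliance), and an orchestrator places a
# workload only on clusters whose tags cover all of the workload's requirements.
# All names and the tag vocabulary here are illustrative.

clusters = [
    {"name": "eu-west-gpu", "attributes": {"eu", "gpu", "pci-dss"}},
    {"name": "us-east-std", "attributes": {"us", "always-on"}},
    {"name": "ap-south-gpu", "attributes": {"apac", "gpu"}},
]

def match_clusters(requirements, clusters):
    """Return names of clusters whose attribute set covers every requirement."""
    required = set(requirements)
    return [c["name"] for c in clusters if required <= c["attributes"]]

print(match_clusters({"gpu", "pci-dss"}, clusters))  # -> ['eu-west-gpu']
```

A production scheduler would weigh latency, capacity and cost as well, but the core operation is this subset test between workload requirements and advertised cluster attributes.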

The Kubernetes ecosystem has continued to demonstrate remarkable growth over the past 12 months. I have no doubt we'll see further evolution in the coming year, as the demand for better automation of deployment, scaling and management of containerized applications is clear.


EU Officially Adopts Digital Markets Act to Target Anti-Competitive Behavior in the Online Marketspace – JD Supra

As the name implies, the DMA is targeted at large online platforms, including those that both consumers and businesses interact with and use on a daily basis: search engines, social networks, online advertising tech providers, cloud computing, and online messaging.

Globally, including in both the US and the EU, legislators and government bodies have raised concerns that large online platforms are able to act as gatekeepers and engage in anti-competitive behavior, negatively impacting both daily consumer life and businesses' ability to operate. The DMA is the EU's effort to open up a fairer online marketplace.

The DMA does not come alone. As noted in our previous alert, the DMA has been developed in close alignment with the Digital Services Act (DSA), together forming the Digital Services Package. The DSA applies to digital services, which broadly include intermediary services, hosting services, online platforms, and very large platforms. Once implemented and effective in January 2024, the DSA will set the standard for requirements related to fairness, transparency, and responsibility that online services must comply with. Relatedly, the DMA focuses on regulating anti-competitive and monopolistic behavior in the technology and online platform (digital and mobile) industries. The DMA is at the forefront of a global trend of looking to antitrust legislation as a way to regulate technology companies and online services.

On top of anti-competition rules and prohibitions, the DMA also imposes personal data processing and data minimization obligations on in-scope entities.

Some of the most important aspects of the DMA are that it will require, among many other requirements, in-scope businesses to (1) allow end-users to unsubscribe from the online service just as easily as they signed up; (2) obtain consent from end-users if the in-scope business is tracking end-users outside of the actual online service for purposes of targeted advertising; and (3) allow third-party apps and app stores to interoperate with the business's online service (e.g., no longer requiring users to use only the in-scope business's apps and app stores).

With the DMA officially adopted, below are some essential insights to help impacted entities prepare for the legislations implementation.

Who Does the DMA Impact?

Although the DMA is a component of the EU's broad regulation of online platforms, it will only impose obligations on a small number of very large online platforms that act as Gatekeepers. This contrasts with the DSA, which will likely apply to a broader swath of businesses operating online.

These Gatekeepers, under the DMA, are core online platforms, such as online search engines, marketplaces, and social networks, that offer gateway services between consumers and businesses and that have become indispensable to thousands of businesses and millions of users. Some of those platforms exercise control over whole platform ecosystems in the digital economy and are structurally difficult to challenge or contest by existing or new market operators, irrespective of how innovative and efficient those market operators may be. As a result, the likelihood increases that the underlying markets do not function efficiently with respect to these entities.

Thus, the intent of the DMA (according to the EU governing bodies) is aimed at regulating these largely uncontestable platforms to circumvent such market failures, while also opening up the markets for broader competition to the benefit of businesses and consumers.

To qualify as a Gatekeeper, a company must fit within the DMA's narrowly defined, quantitative, objective criteria. To achieve Gatekeeper status, the company must meet the following:

(1) Significant impact on the internal market. The company must have an EU annual turnover above €7.5 billion, a market capitalization over €75 billion, and it must provide the same core platform service in at least three Member States;

(2) Provide a core platform service for business users to reach end users. The company must have at least forty-five (45) million active monthly end-users and ten thousand (10,000) yearly business users in the EU; and

(3) Durable and stable position in the market. The company must have an entrenched and durable position in the market, meaning that it is stable over time, i.e., the company met the criteria in point (2) above in each of the last three (3) financial years.

Platforms and businesses that meet these criteria are presumed to be a Gatekeeper and are required to inform the European Commission within two (2) months of meeting the thresholds. The Commission then designates the company as a Gatekeeper unless the company provides compelling evidence to the contrary. The Gatekeeper status is then re-evaluated every three (3) years.
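The quantitative thresholds above reduce to a simple boolean check. The figures below are the ones reported in this article; the function and field names are hypothetical, chosen only for illustration:

```python
# Sketch of the DMA Gatekeeper presumption test using the thresholds described
# above. Field and function names are illustrative, not from the regulation.

def is_presumed_gatekeeper(co):
    significant_impact = (
        co["eu_annual_turnover_eur"] > 7.5e9          # EU turnover above EUR 7.5bn
        and co["market_cap_eur"] > 75e9               # market cap over EUR 75bn
        and co["member_states_served"] >= 3           # same service in >= 3 states
    )
    core_platform_scale = (
        co["monthly_active_end_users"] >= 45_000_000  # >= 45m monthly end-users
        and co["yearly_business_users"] >= 10_000     # >= 10,000 yearly business users
    )
    durable_position = co["years_meeting_scale_criteria"] >= 3
    return significant_impact and core_platform_scale and durable_position

example = {
    "eu_annual_turnover_eur": 9e9,
    "market_cap_eur": 120e9,
    "member_states_served": 5,
    "monthly_active_end_users": 60_000_000,
    "yearly_business_users": 25_000,
    "years_meeting_scale_criteria": 4,
}
print(is_presumed_gatekeeper(example))  # -> True
```

As the article notes, meeting the thresholds creates only a presumption: the company must notify the Commission, which then designates it as a Gatekeeper unless it provides compelling evidence to the contrary.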

Obligations and Restrictions

The DMA establishes obligations for Gatekeepers and outlines what Gatekeepers may no longer do.

The bulk of the DMA's regulatory requirements revolve around allowing other businesses to promote their online offerings more easily to end-users on the Gatekeeper platforms, and allowing end-users to access other, similar online services and offerings from third-party businesses through the Gatekeepers' platforms. It essentially boils down to requiring Gatekeepers to allow more direct access and communication between third-party businesses and consumers.

For example, the DMA obligates Gatekeepers to (i) allow third parties to interoperate with the Gatekeeper's own services in certain situations, (ii) allow their business users to access the data that they generate in their use of the Gatekeeper's platform, (iii) provide companies advertising on their platform with the tools and information necessary for advertisers and publishers to carry out their own independent verification of their advertisements hosted by the Gatekeeper, and (iv) allow their business users to promote their offers and conclude contracts with their customers outside the Gatekeeper's platform.

Alternatively, Gatekeepers may no longer (i) treat services and products offered by the Gatekeeper itself more favorably in ranking than similar services or products offered by third parties on the Gatekeeper's platform, (ii) prevent consumers from linking up to businesses outside their platforms, (iii) prevent users from uninstalling any pre-installed software or app, and (iv) track end users outside of the Gatekeeper's core platform service for the purpose of targeted advertising, without effective consent.

Impact on Online Advertising

Importantly, the DMA also implements personal data and tracking-related regulations that closely align with the EU's data collection principles originally set forth in the General Data Protection Regulation (GDPR).

Without obtaining an end-user's specific consent, Gatekeepers are prohibited from (i) processing personal data for advertising purposes if the personal data comes from an end user's interactions with a third party using the Gatekeeper's online platform or service (e.g., the end-user has no direct connection to the Gatekeeper); (ii) combining personal data obtained from one core online platform service with personal data obtained from third-party services; (iii) using personal data obtained through one of the Gatekeeper's core online platforms or services in the Gatekeeper's other online platforms or services; and (iv) signing end users in to other services the Gatekeeper provides for the purpose of combining personal data.

The above prohibitions and consent requirements tie into the GDPR's principle of data minimization: using the minimum amount of personal data, and only for the purposes of which a business has informed the end-user.

Gatekeepers will need to build out consent mechanisms and procedures if they hope to continue the above data collection and processing practices, both for their own purposes and as part of the services they provide to third parties (e.g., advertising and analytics services).

Penalties

The European Commission, supported by the national competition authorities, will carry out enforcement of the DMA. The Commission will have the sole authority to initiate proceedings and make infringement decisions. The DMA sets maximum fines based on a percentage of a company's global annual turnover. If a Gatekeeper fails to adhere to the requirements in the DMA, the Commission can impose fines of up to 10% of the company's total global annual turnover, and up to 20% for repeated infringements. Additionally, the Commission can impose periodic penalty payments of up to 5% of the Gatekeeper's average daily turnover. In the case of systematic infringements of the DMA obligations, the Commission also has the authority to impose additional remedies necessary to achieve compliance. These remedies can include behavioral and structural remedies, such as the forced sale of parts of the business or a prohibition on the acquisition of other companies in the digital sector, but in any case must be proportionate to the offense committed.
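To give a rough sense of those ceilings, here is a small sketch; the percentages come from the text above, while the turnover figure and function names are hypothetical:

```python
# Fine ceilings under the DMA as described above: up to 10% of global annual
# turnover (20% for repeat infringements), plus periodic penalty payments of up
# to 5% of average daily turnover. The turnover figure below is hypothetical.

def max_fine(global_annual_turnover, repeated=False):
    """Ceiling on a one-off fine: 10% of turnover, or 20% if repeated."""
    return global_annual_turnover * (0.20 if repeated else 0.10)

def max_daily_penalty(average_daily_turnover):
    """Ceiling on a periodic penalty payment: 5% of average daily turnover."""
    return average_daily_turnover * 0.05

annual = 100e9  # a hypothetical EUR 100bn annual turnover
print(max_fine(annual))                 # up to EUR 10bn
print(max_fine(annual, repeated=True))  # up to EUR 20bn
print(max_daily_penalty(annual / 365))  # roughly EUR 13.7m per day
```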

Key Takeaways

The DMA will be applicable as of May 2023. This gives potentially affected entities roughly six (6) months to assess their Gatekeeper status and jump-start their compliance efforts. Once in effect, the DMA aims to provide a fairer online business environment and intends to create new opportunities for innovators and technology start-ups to compete in the online platform environment without having to comply with unfair terms and conditions limiting their development. Lastly, the DMA anticipates that consumers will have more and better services to choose from, more opportunities to switch their providers, direct access to services, and fairer prices. Put simply, the DMA plans to prevent Gatekeepers from using unfair practices toward the businesses and customers that depend on them to gain an undue market advantage.


AFME and Protiviti report calls for coordinated approach for further cloud innovation – Asset Servicing Times

AFME and Protiviti report calls for coordinated approach for further cloud innovation

65 per cent of cloud services are provided by just three entities, whose dominance is raising concerns among financial regulators, a report by the Association for Financial Markets in Europe (AFME) has found.

The AFME report, entitled State of Cloud Adoption in Europe - Preparing the path for Cloud as a Critical Third-party Solution, finds that this entity dominance, among other factors, is making it more difficult for firms to adopt cloud services and fully leverage their potential.

Financial institutions are also subject to multiple different regulators that may ask for the same information in different formats and through different channels, the report adds, with regulatory fragmentation and long approval times preventing financial institutions from innovating and slowing the pace of cloud adoption.

AFME cited the management of disruption in the cloud as a significant barrier to further innovation, with several high-profile cloud service outages highlighting the need for greater visibility and confidence in cloud providers' abilities to predict, manage and communicate disruptions.

This is mainly due to the fact that regulators expect financial institutions to have primary responsibility for resisting threats to operational resilience, to guard against service disruptions and to recover from incidents, the report outlines.

AFME suggests nine recommendations for policymakers to address these challenges, including considering how cloud solution providers (CSPs) could be encouraged to provide greater transparency on resiliency, dependency and security issues within cloud services.

It outlines that greater visibility and analysis of dependencies between regions and the underlying control plane within each CSP is paramount. It also says that the adoption of multi-cloud strategies should remain at the discretion of individual financial institutions and should not be mandatory. Such a mandate could increase, rather than address, systemic concentration risk, the association warns.

In terms of regulatory complexity, AFME requests that authorities consider an approval model for deploying services to the cloud at a platform level, or remove time requirements for notifications, to reduce delays in the approval process.

The association also encourages greater coordination between the European Central Bank, European Supervisory Authorities and National Competent Authorities to ensure a consistent application of outsourcing requirements. It asks that Information and Communication Technologies third-party registers also ensure minimum duplication for financial institutions and supervisors.

In addition, it requests that policymakers and regulators refrain from requiring localisation of data or cloud hosting solutions, as this challenges resilience, inhibits innovation and increases operational complexity.

Published in collaboration with Protiviti, AFME's report follows a September 2021 publication outlining the key regulatory barriers to the greater adoption of cloud services in capital markets, "Building Resilience in the Cloud".

Fiona Willis, associate director of technology and operations at AFME, says: "The benefits of cloud technology for the growth of the financial services sector are clear. [It allows] financial institutions to deliver agile, scalable and resilient services to their clients. However, our report finds the rate of adoption of cloud technology is currently being held back by overly complex and unharmonised regulation."

She adds: "AFME members believe it is essential that policymakers, in the EU and globally, do not inadvertently impact the continued adoption of cloud services. We therefore make key recommendations to help ensure regulators and policymakers can work together to unlock the full potential of cloud opportunities for the financial services sector."

James Fox, director of Enterprise Cloud at Protiviti, comments: "Regulators are quite rightly taking steps to make sure that the application of cloud technologies within financial services is properly regulated to avoid any potential risks or issues that could harm the global financial system.

"However, a careful balancing act needs to be struck between properly regulating cloud technologies and not stifling innovation and competition within the financial services sector, and as our recent report shows, the current regulatory complexity is making it more difficult for financial institutions to adopt the cloud."

Follow this link:

AFME and Protiviti report calls for coordinated approach for further cloud innovation - Asset Servicing Times

Categories
Cloud Hosting

NICE Enlighten XO Receives 2022 Industry Award from Speech Technology for Boosting Contact Center Performance with AI-Based Solutions – Yahoo Finance

Enlighten XO advanced AI builds smart self-service, delivering CXi through a unified suite of applications on the CXone platform

HOBOKEN, N.J., December 08, 2022--(BUSINESS WIRE)--NICE (Nasdaq: NICE) today announced that Speech Technology magazine has named NICE a 2022 Top Ten Speech Industry award winner for boosting contact center performance and functionality with its artificial-intelligence (AI) based capabilities powered by its Enlighten AI engine, as well as NICE CXone's advanced analytical and digital capabilities. The Speech Technology awards program recently highlighted developments in speech technologies across a range of industries and advanced technology providers.

According to Speech Technology, "When it comes to cloud contact center solutions, one could easily make the case that NICE CXone is among the leading integrated platforms in the industry. And in the past year, NICE really boosted performance and functionality with a slew of artificial intelligence-based capabilities powered by its Enlighten AI engine."

Speech Technology continued, "An example is the new Enlighten XO, which automatically generates insights from human conversations to build smart self-service with advanced AI. Enlighten XO analyzes 100 percent of interactions from any voice or text platform to discover opportunities for automation. Purpose-built AI models identify customer intents, training phrases, and problem-solving activities."

Speech Technology also pointed out other contributions by NICE to contact center performance excellence:

NICE Customer Experience Interactions (CXi) is a framework delivered through a unified suite of applications on the CXone platform. CXi empowers organizations to meet customers wherever their journeys begin, enables resolution through AI and data-driven self-service, and prepares agents to resolve customer issues. The CXi approach combines CCaaS, workforce optimization, speech and text analytics, artificial intelligence, and digital self-service.

NICE introduced Enlighten AI for Complaint Management, which automatically identifies and categorizes consumer complaints and automates the remediation process. Driven by NICE's AI engine, the solution analyzes 100 percent of interactions across all communication channels and operationalizes root-cause insights to protect organizations from reputational and compliance risks. NICE Enlighten AI for Complaint Management also serves as an early warning system, notifying companies of the potential risk of regulatory action.

NICE partnered with Google Cloud, integrating CXone with Google Cloud Contact Center Artificial Intelligence (CCAI), enabling intelligent natural language capabilities across the customer journey, including self-service bots and agent-facing virtual assistants. CXone Virtual Agent Hub allows businesses to expand their customer self-service capabilities with conversational bots for voice and chat that leverage Google Cloud's Contact Center AI.


"NICE is taking the digital-first customer experience to the next level through the power of AI," said Barry Cooper, President, CX Division, NICE. "We are proud to receive this important award that reinforces our role as an industry leader helping brands deliver data-driven, actionable insights in real time as they automatically learn from every interaction."

For further information on NICE Enlighten AI for CX, please visit our website here.

About Speech Technology
Information Today, Inc. (ITI), located in Medford, N.J., is the parent company of Speech Technology Media, producers of Speech Technology magazine, SpeechTechMag.com, and the SpeechTEK conference. SpeechTEK and Speech Technology magazine are recognized worldwide as the leading sources of news, information, and analysis relating to the speech technology industry. Both provide additional sources of news, information, and analysis through online communities at speechtek.com and speechtechmag.com, and opt-in electronic distribution networks, STM eWeekly, and The Speech Technology Bulletin.

About NICE
With NICE (Nasdaq: NICE), it's never been easier for organizations of all sizes around the globe to create extraordinary customer experiences while meeting key business metrics. Featuring the world's #1 cloud native customer experience platform, CXone, NICE is a worldwide leader in AI-powered self-service and agent-assisted CX software for the contact center and beyond. Over 25,000 organizations in more than 150 countries, including over 85 of the Fortune 100 companies, partner with NICE to transform - and elevate - every customer interaction. http://www.nice.com.

Trademark Note: NICE and the NICE logo are trademarks or registered trademarks of NICE Ltd. All other marks are trademarks of their respective owners. For a full list of NICE marks, please see http://www.nice.com/nice-trademarks.

Forward-Looking Statements
This press release contains forward-looking statements as that term is defined in the Private Securities Litigation Reform Act of 1995. Such forward-looking statements, including the statements by Mr. Cooper, are based on the current beliefs, expectations and assumptions of the management of NICE Ltd. (the "Company"). In some cases, such forward-looking statements can be identified by terms such as "believe," "expect," "seek," "may," "will," "intend," "should," "project," "anticipate," "plan," "estimate," or similar words. Forward-looking statements are subject to a number of risks and uncertainties that could cause the actual results or performance of the Company to differ materially from those described herein, including but not limited to the impact of changes in economic and business conditions, including as a result of the COVID-19 pandemic; competition; successful execution of the Company's growth strategy; success and growth of the Company's cloud Software-as-a-Service business; changes in technology and market requirements; decline in demand for the Company's products; inability to timely develop and introduce new technologies, products and applications; difficulties or delays in absorbing and integrating acquired operations, products, technologies and personnel; loss of market share; an inability to maintain certain marketing and distribution arrangements; the Company's dependency on third-party cloud computing platform providers, hosting facilities and service partners; cyber security attacks or other security breaches against the Company; the effect of newly enacted or modified laws, regulation or standards on the Company and our products; and various other factors and uncertainties discussed in our filings with the U.S. Securities and Exchange Commission (the "SEC").
For a more detailed description of the risk factors and uncertainties affecting the company, refer to the Company's reports filed from time to time with the SEC, including the Company's Annual Report on Form 20-F. The forward-looking statements contained in this press release are made as of the date of this press release, and the Company undertakes no obligation to update or revise them, except as required by law.

View source version on businesswire.com: https://www.businesswire.com/news/home/20221208005072/en/

Contacts

Corporate Media Contact: Cindy Morgan-Olson, +1 646 408 5896, ET, Cindy.morgan-olson@niceactimize.com

Investors: Marty Cohen, +1 551 256 5354, ET, ir@nice.com

Omri Arens, +972 3 763 0127, CET, ir@nice.com

Visit link:

NICE Enlighten XO Receives 2022 Industry Award from Speech Technology for Boosting Contact Center Performance with AI-Based Solutions - Yahoo Finance

Categories
Cloud Hosting

Taking control of your cloud: How to stop the weakening rand from negatively affecting your technology strategy – ITWeb

The depreciation of the rand against the US dollar reached its highest point of the year in early November, at R18.41 to the dollar. According to a recent Moneyweb piece, this can be attributed to a number of factors: interest rate hikes in the US and the promise of higher yields are causing investors to move away from emerging markets, while the Russian war in Ukraine, lockdowns in China and more continue to fuel inflation that had already begun in 2021 as economies recovered from the pandemic.

"The weakening rand brings with it a host of knock-on effects, from inflation increases to a more negative general sentiment within the country," explains Jaap Scholten, Head: Group Hybrid IT Strategy at Datacentrix, a hybrid IT systems integrator and managed services provider.

It also has a direct impact on the technology sector, in particular the cloud consumption of local businesses, with many organisations hosting workloads with hyperscalers now finding themselves roughly 20% over budget year on year.

Add to this the ever-growing volume of data being created today, and you're left with companies that desperately need another way to balance ever-increasing data volumes against their technology budgets.

According to Scholten, should a business find itself in this position, it's time to reconsider its cloud set-up and strategy.

One consideration for organisations is whether to switch solution providers, preferably to a local partner that fixes costs in rands, as this circumvents the challenges of a fluctuating exchange rate. It is also important to find a provider with no data egress costs. While feeding data into the cloud (data ingress) is effectively free, it can become costly to get it out again (egress), particularly when paying for it in dollars. Workloads with a high transaction rate of data in and out, such as databases, suffer the most in terms of data egress costs.
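As a rough illustration of why egress-heavy workloads suffer under dollar-denominated billing, here is a toy calculation. The per-GB egress rate, the exchange rate and the function name are all assumptions for the example, not any provider's actual pricing.

```python
# Toy model of hyperscaler data-transfer billing: ingress is free,
# egress is billed per GB in dollars and then converted to rand.
# The egress rate and exchange rate below are hypothetical.

def monthly_transfer_cost_zar(egress_gb, egress_usd_per_gb=0.09, usd_to_zar=18.41):
    """Egress is billed in dollars; ingress costs nothing, so it is omitted."""
    return egress_gb * egress_usd_per_gb * usd_to_zar

# A chatty database workload moving 5 TB out per month:
cost = monthly_transfer_cost_zar(egress_gb=5000)
print(f"R{cost:,.2f}")  # a local provider with no egress fees would bill R0 here
```

The rand cost scales with both the egress volume and the exchange rate, which is exactly the exposure a rand-fixed local contract removes.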

Another important advantage of local hosting is the fact that data sovereignty is ensured, meaning that compliance with local data privacy and security regulations will also be in place.

These challenges can easily be overcome with the hybrid cloud model from Datacentrix, which is hosted in Teraco's highly available environment and powered by the Hewlett Packard Enterprise (HPE) GreenLake edge-to-cloud platform.

This Africa-first cloud offering delivers an 'as-a-service' experience that provides a base load combined with on-demand capacity, providing the agility and economics of public cloud with the security and performance of on-premises IT.

"The discussion around consumption-based IT and its ability to offer flexibility and scalability associated with cloud while maintaining on-premises autonomy over an organisation's data is not new," states Scholten, "nor is the narrative around the cost benefits of pay-per-use economics that eliminate investment in excess compute, storage or networking capacity."

"However, it is worth unpacking the total economic impact of our hybrid cloud solution to get a better picture of the holistic cost benefits that it delivers to the modern enterprise," he continues.

"With Datacentrix, you effectively pay for a baseload set at a certain threshold, plus whatever you consume over and above that baseload, on a varying basis. The financial commitment is thus made on the baseload, with calculable costs as you scale up, which makes for a predictable financial outlay if you need more capacity for your baseload."
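The baseload-plus-overage arithmetic described above can be sketched as follows; the baseload size, unit price and function name are invented for illustration and are not Datacentrix's actual pricing.

```python
# Sketch of the baseload-plus-consumption billing model described above.
# The baseload size and rand unit price are invented for illustration.

def monthly_bill_zar(baseload_units, unit_price_zar, consumed_units):
    """Commit to a baseload; usage above it is billed at the same fixed
    rand unit price agreed at the start of the term."""
    overage = max(0, consumed_units - baseload_units)
    return (baseload_units + overage) * unit_price_zar

print(monthly_bill_zar(100, 50.0, 80))   # under the baseload: pay the commitment, R5000
print(monthly_bill_zar(100, 50.0, 130))  # 30 units over: R6500, a calculable scale-up
```

Because the unit price is fixed in rands for the term, the bill is a simple linear function of consumption above the commitment, which is what makes the outlay predictable.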

"Importantly, our pricing is fixed in South African rands at the beginning of the term, anywhere between one and three years, and even scale-out options during the hosting contract will still be quoted at the same unit price. This ultimately adds more stability and predictability to your financial commitment, while also negating the often expensive egress costs," he adds.

"There are various intangible factors that should be considered too. For instance, if you work with a reputable systems integrator, you could gain high availability across multiple data centres, which effectively means no downtime. In addition, if the deployment is in the right data centre, you also gain significantly from connectivity savings: a cable into the African Cloud Exchange will give you a gigabit per second or faster connectivity for the cost of a cross-connect cable."

"We also believe our customers can expect to see a significant drop in historic total cost of ownership (TCO). With traditional IT infrastructure, there are issues pertaining to growth and capacity planning, where those nasty financial surprises usually slip in. For instance, there may be no compatible hardware available when wanting to upgrade, or a lack of integration between disparate infrastructure, or even significant price hikes on new generation equipment. Datacentrix's cloud model eliminates these problems and provides customers with a clear financial path forward."

"Lastly, it is worth mentioning that, with Datacentrix, you do not need highly skilled staff members to keep your systems running. Should you already have these skills in place, they can instead be applied to optimising your deployments, rather than having them look after hardware, storage and networking stability," he concludes. "Essentially then, not only can this reduce maintenance costs, but also accelerate improvements and transformation."

Read the original post:

Taking control of your cloud: How to stop the weakening rand from negatively affecting your technology strategy - ITWeb

Categories
Dedicated Server

Ampere Cloud Native Processors Can Help Achieve Sustainability Goals – Forbes

Environmental sustainability is no longer just a corporate social responsibility but a business imperative. That was evident at the recent Open Compute Project (OCP) Global Summit, where sustainability was the theme in almost every booth.

We are all aware of the rising energy consumption of hyperscale data centers and the comparisons to towns and even small countries. Of course, the solution is straightforward: more compute with less power. This opportunity has created a new wave of innovative startup chipset companies. I recently wrote an article about Groq, which focuses on discrete chips for specialized workloads such as artificial intelligence (AI).

Ampere is another rising star with the same goal but a different strategy. Ampere is designing purpose-built, multi-core processors optimized for multi-tenant cloud workloads.

Ampere

Time for a new type of CPU

I am an avid chip guy who has been in the industry for many years. This story reminded me of the transition from mainframes to a client-server model a couple of decades ago. First, the usage model changed; then the software adapted to become higher performing and more efficient; and finally, the hardware changed. We are going through a similar transition right now. Cloud-native has happened from an architectural perspective and also from a software perspective. It is time for a cloud-native processor to drive higher performance and improved sustainability.

Enterprise-class processors: the wrong tool for the cloud

CSPs must have profitable cloud infrastructure while delivering the performance customers expect. Enterprise-class processors are not the right tool for cloud-native workloads and can negatively impact customer SLAs and add unpredictability.

Enterprise-class processors support applications written as monolithic blocks of code executed in a dedicated environment. Enterprise-class processors have evolved with increasing power and large amounts of memory with innovations such as high-capacity cores, higher CPU frequencies, and larger caches.

Simultaneous multithreading (SMT) and Turbo Boost are enhancements that can be effective for the correct use case but not for latency-sensitive, scalable cloud-native workloads.

SMT enables each physical core to split into two logical threads simultaneously executing separate instruction sequences. Turbo Boost automatically runs cores faster than the rated operating frequencies.

SMT can cause problems in a multi-tenant cloud environment. Multiple applications must share execution resources when a CPU core has two SMT threads. A resource-intensive workload, known as a noisy neighbor, will cause workloads on the other SMT thread to slow from a lack of execution resources. Similarly, Turbo Boost can cause unpredictable performance depending on the type of workload.
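The contention effect can be sketched with a toy proportional-sharing model. This is a simplification for intuition only, not a model of any real SMT implementation; the function name and numbers are assumptions.

```python
# Toy model of the "noisy neighbor" effect on an SMT core, under the
# simplifying assumption that co-resident threads share execution
# resources in proportion to demand. Real contention is workload-dependent.

def smt_throughput(demand_a, demand_b, core_capacity=1.0):
    """Two logical threads on one physical core. If combined demand exceeds
    the core's capacity, both are scaled back proportionally."""
    total = demand_a + demand_b
    if total <= core_capacity:
        return demand_a, demand_b  # no contention
    scale = core_capacity / total
    return demand_a * scale, demand_b * scale

# A light tenant (0.3) sharing a core with a noisy neighbor (0.9):
light, noisy = smt_throughput(0.3, 0.9)
print(light < 0.3)  # True: the light tenant is throttled below its demand

# On a dedicated single-threaded core, the same tenant keeps its full 0.3.
```

The point of the sketch is that the light tenant's throughput now depends on its neighbor's demand, which is exactly the unpredictability a single-threaded-core design avoids.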

In summary, enterprise-class cores perform well with computationally intensive applications but not in a shared cloud infrastructure.

Ampere taking a clean-slate approach to CPU design

Cloud-native applications have distributed components, such as micro-services, that perform specific tasks and collaborate to achieve an objective. A benefit is that lightweight applications are faster to develop, test, and integrate, leading to practices like continuous integration/continuous delivery (CI/CD).

Ampere took a clean-slate approach to CPU design that caters to this new software paradigm in multi-tenant environments. CSPs want to host more end users per server with dedicated physical cores. Ampere responded with a processor design that offers near-linear scaling across a 128-core Altra processor.

Ampere's processors are immune to the "noisy neighbor" issues that plague enterprise-class processors with SMT. Ampere's processor cores are single-threaded, resulting in no resource contention and more predictable performance. With Ampere, each thread runs solo on a single core.

Changing the game in terms of sustainability

Ampere starts with single-threaded cores - one thread, one process - different from enterprise-class processors with multiple threads competing for processor resources. Multiple threads create an unpredictable environment, whereas a single-threaded design provides the same performance on every core, resulting in a high degree of predictability.

Ampere uses a constant operating frequency, delivering predictability, as opposed to x86 processors that employ frequency scaling. Ampere uses power-efficient cores, which allow the stacking of many cores into a processor, yielding the highest core counts in the industry.

Ampere pipelines cores with a large private L2 cache located right next to the core. Loading data and instruction sets into the L2 cache results in a predictable performance profile for each core.

The high number of cores makes it cost-effective for CSPs to rent each core to a single customer. Customers get scalability and predictable performance. CSPs can run at the lowest possible power with fully populated racks with no stranded capacity. Running at the lowest power reduces the thermal load. The net result is a quantum leap in efficiency and maximum performance per rack.

A strong endorsement from Microsoft Azure

Microsoft now offers Azure Virtual Machines with Ampere Altra Arm-based processors that can run Linux workloads such as web servers, open-source databases, in-memory applications, big data analytics, gaming, and media.

An Ampere Altra VM is a high-performance compute alternative that scales up linearly, delivers predictable performance at full utilization, and is power efficient, directly reducing users' overall carbon footprint.

Wrapping up

Ampere has addressed the simple question posed at the beginning: more compute with less power. Single-threaded cores, consistent operating frequencies, and the most power-efficient cores in the industry, topped by a large L2 cache close to each core, add up to a cloud-native processor built for the sustainable cloud.

Microsoft has proven this isn't a fantasy - the solution exists today. Instead of building out data centers with enterprise-class processors, switching to Ampere's cloud-native processors could reduce power consumption by 20 percent and still meet compute needs.

Other major players have also embraced the Ampere technology. Hewlett Packard Enterprise announced it will offer a cloud-native server later in 2022 using Ampere chips. Google Cloud has launched virtual machines using Ampere Altra Arm-based processors. Oracle, in addition to being a major investor, offers a comprehensive line of Ampere platforms on Oracle Cloud Infrastructure (OCI).

Moor Insights & Strategy, like all research and tech industry analyst firms, provides or has provided paid services to technology companies. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking, and speaking sponsorships. The company has had or currently has paid business relationships with 88, Accenture, A10 Networks, Advanced Micro Devices, Amazon, Amazon Web Services, Ambient Scientific, Anuta Networks, Applied Brain Research, Applied Micro, Apstra, Arm, Aruba Networks (now HPE), Atom Computing, AT&T, Aura, Automation Anywhere, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, , C3.AI, Calix, Campfire, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Cradlepoint, CyberArk, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Dialogue Group, Digital Optics, Dreamium Labs, D-Wave, Echelon, Ericsson, Extreme Networks, Five9, Flex, Foundries.io, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Revolve (now Google), Google Cloud, Graphcore, Groq, Hiregenics, Hotwire Global, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Infinidat, Infosys, Inseego, IonQ, IonVR, Inseego, Infosys, Infiot, Intel, Interdigital, Jabil Circuit, Keysight, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, Lightbits Labs, LogicMonitor, Luminar, MapBox, Marvell Technology, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Merck KGaA, Mesophere, Micron Technology, Microsoft, MiTEL, Mojo Networks, MongoDB, National Instruments, Neat, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nutanix, Nuvia (now Qualcomm), onsemi, ONUG, OpenStack Foundation, Oracle, Palo Alto Networks, Panasas, Peraso, Pexip, Pixelworks, Plume Design, PlusAI, Poly (formerly Plantronics), Portworx, Pure Storage, Qualcomm, Quantinuum, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Renesas, Residio, Samsung Electronics, Samsung Semi, 
SAP, SAS, Scale Computing, Schneider Electric, SiFive, Silver Peak (now Aruba-HPE), SkyWorks, SONY Optical Storage, Splunk, Springpath (now Cisco), Spirent, Splunk, Sprint (now T-Mobile), Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, Telesign,TE Connectivity, TensTorrent, Tobii Technology, Teradata,T-Mobile, Treasure Data, Twitter, Unity Technologies, UiPath, Verizon Communications, VAST Data, Ventana Micro Systems, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zayo, Zebra, Zededa, Zendesk, Zoho, Zoom, and Zscaler.

Moor Insights & Strategy founder, CEO, and Chief Analyst Patrick Moorhead is an investor in dMY Technology Group Inc. VI, Dreamium Labs, Groq, Luminar Technologies, MemryX, and Movandi.

Read more:

Ampere Cloud Native Processors Can Help Achieve Sustainability Goals - Forbes

Categories
Dedicated Server

After server shutdown, Kakao pledges to triple investment over next 5 years – The Korea Herald

Kakao's former and current officials speak about the cause of the Kakao service outage and future preventive measures during the online conference on Wednesday. (Kakao)

South Korean IT giant Kakao vowed Wednesday to triple its investment in efforts to set up more stable service operations after the company faced heat when its major services, such as KakaoTalk, were shut down due to a data center fire.

"Kakao will invest more than three times the amount it invested in the past five years to secure human resources and develop technologies for service stabilization while implementing disaster recovery with a triple structure or higher over the next five years," said Ko Woo-chan, a subcommittee co-chair under Kakao's emergency countermeasure committee, in an online conference.

Kakao did not disclose the amount of investment it had used for service stabilization in the past five years. A Kakao official told The Korea Herald that the future investment will include various spending areas, including expanding infrastructure such as more servers and upgrading information security.

The conference, named "if(kakao)dev 2022," is the IT company's annual developers event. But this year's edition has been dedicated to looking back at what went wrong during the service blackout in October and what the company could do better in the future to prevent such a mishap.

Kakao's first data center in Ansan, Gyeonggi Province, began construction in January. With a total of 460 billion won ($350 million) allocated for the project, the data center is expected to begin operation in 2024. Kakao said it is strengthening the data center's electricity, cooling and communication infrastructure so that it can stay operational under all circumstances.

Ko added that Kakao's envisioned disaster recovery architecture, which is expected to have three or more data centers, can serve as a backup even if one of the three data centers becomes paralyzed.

According to Kakao, the company plans to set up a new committee on disaster recovery in preparation for large-scale service errors.

Ko pointed out that the company will work with outside experts and partners to diagnose the vulnerability of Kakao's business continuity plan -- an action plan designed to minimize the stoppage of business despite natural disasters or unforeseen incidents.

Namkoong Whon, former co-CEO of Kakao, said the incident made the company realize what was at the core of its environment, social and governance efforts.

"Kakao's top ESG priority was providing our services stably. Our (server) duplication measures were like an incomplete bridge. We will do our utmost to ensure such an incident never takes place in the future," he said. Namkoong has been part of Kakao's emergency countermeasure committee since he stepped down from the top position shortly after the service blackout.

Kakao has still not unveiled a possible compensation plan for those who claimed to have suffered from the IT company's service meltdown.

According to the Ministry of Science and ICT, there have been about 100,000 damage reports over Kakao's service outage. Of them, 15.1 percent said they used Kakao's free services, such as KakaoTalk, the popular messaging app used by over 90 percent of the population, and took a financial hit.

Kakao's consultative body, which was established to discuss damage compensation, has carried out two meetings but has not come up with a blueprint for a compensation plan. The members of the consultative body include Kakao officials and representatives of small merchants and a consumer association.

By Kan Hyeong-woo (hwkan@heraldcorp.com)

View post:

After server shutdown, Kakao pledges to triple investment over next 5 years - The Korea Herald