
eSIMs to enable CIOs to deliver IoT and savings – Diginomica

(Image by Pete Linforth from Pixabay)

The humble SIM card is following hard on the heels of storage and networking in becoming software-defined. With the physical hardware making way for software, known as the eSIM, connectivity should increase, making deployment of the Internet of Things (IoT) easier and more cost-effective. Not only will CIOs be able to realize IoT ambitions; supporters and early adopters of eSIMs also report a lower cost of ownership, increased flexibility and end-user benefits.

It's hard to believe, but the SIM is now 25 years old. The eSIM is an embedded SIM card and, therefore, programmable. Devoid of a removable universal integrated circuit card (the traditional SIM card), an eSIM consists of software installed onto a built-in universal integrated circuit card. First seen in 2016, the technology is backed by the GSMA, the mobile networks industry body.

CIOs and organizations adopting eSIM technology will need to decide between two formats. The first is the Consumer Solution, which, as the name suggests, will typically be seen in a handset or device and allows the owner to choose their operator just as they select which apps to use. The second, M2M, or machine-to-machine, is dedicated to business uses such as IoT.

Unlike the Consumer Solution, M2M involves no user interaction and is instead typically operated through a server and management platform run by the IT department. Management, operation and storage of the eSIM happen within the device. Both formats are secured with Pre-Shared Key (PSK) and Public Key Infrastructure (PKI) cryptography.

Whichever format CIOs choose, the aim of eSIMs is to reduce restrictions and increase the utility of mobile connectivity. Martin Langmaid, CTO at Belgian telco services provider Venn Telecom, says:

We need modularity, and we need to be able to adapt very quickly to different situations. Before this concept of the eSIM, we were juggling drawers full of SIMs from different operators, each with different capabilities and IDs. It was a headache to know which SIM and which service, and which device should it be in.

As a technology leader, you find that you need different operators or network capabilities in different regions. You are in a horrible place of physical SIM swaps, which is a logistical nightmare, especially for tablet devices that are airside at an airport or visiting Point of Sale (POS) terminals distributed across the nation.

The digitization of the economy relies on connectivity to be a utility, and that means CIOs cannot rely on one network provider alone. An example is Telli Health, a USA-based maker of remote patient care devices such as glucose monitors. Its healthcare customers were experiencing a high degree of data transmission failure.

Telli Health has switched to eSIMs, using the Eseye M2M eSIM system. Now its customers don't need to worry about which of the five major mobile networks in the USA a device is connected to; the device will use any of them. The Miami-headquartered business had considered Bluetooth but opted for eSIM, says Will Dos Santos, Telli Health Sales & Account Executive. He adds:

With Bluetooth, there were compatibility issues, and with eSIM, there is no fumbling with apps. The patient simply turns on the device, they take a reading, and that is it; the data is sent to the healthcare organisation. eSIM is a simple process as everything happens in the back end.

Charity CIO Gerard McGovern has been an early adopter of eSIM for his organization, The Guide Dogs for the Blind Association, and is seeing real business benefits, he says:

Turning data on or off or switching tariffs is so much easier with an eSIM. We have 1,800 staff, 300-400 people join or leave the organization each year, and the majority of our staff have a phone.

McGovern says staff at Guide Dogs benefit too, which improves staff retention. He explains:

People want to separate home and work life. Guide Dogs has a high number of phones as our service users are clients and need the ability to call or message us directly. Our mobility specialists don't want to share their own number.

CTO Langmaid adds:

With multiple IDs added to a SIM that has been deployed, we know that in the future, when a customer's needs change, we can support that change using the built-in intelligence of eSIM. Whilst for others, it is about worry-free connectivity. One of our biggest users is in commercial shipping, and their people are travelling and need to know that data will work wherever they are, and at a set fee, so they can just get on with their work.

Langmaid and Venn Telecom are systems integrators and have utilized the Webbing eSIM platform to deploy the Consumer Solution into organizations. Langmaid says:

We can add network operators to a customer in an afternoon. That means we take away the fear of operator regret where a CIO chooses operator X and then finds they wish they'd gone for operator Y.

You are taking back control over your choice of operator and network, and you are changing the power dynamic by being able to remotely manage that demand.

IoT has had plenty of hype, but implementation success stories are less prevalent. One of the reasons has been connectivity challenges. A Beecham Research survey for network hardware and services provider Sierra Wireless found that organizations could not access cost-effective and reliable connectivity to make IoT successful. Inevitably, businesses that have invested in and backed eSIM believe their technology is the answer. CIO David Doherty at Jurassic Fibre, a fibre network provider in Devon, UK, says they're right to be confident. He adds:

eSIMs in devices will make IoT a lot easier for CIOs, especially if they need to change service providers or make upgrades.

Software-defined has been the making of CIOs. Whether it's Software-as-a-Service (SaaS) or the plethora of cloud-based services, CIOs are spending an ever-decreasing amount of time managing bytes and pipes. Moving network connectivity to a software play akin to that of SaaS will again benefit CIOs.

In the public sector, CIOs will be able to lead digital care adoption to improve care outcomes at a lower cost, while IoT will enable improvements in data collection and management and make the CIO and their department the central control unit of the organization as it strives to reduce emissions and costs.

Those business technology leaders that have already adopted eSIMs are seeing improvements in the role technology plays in work/life balance and the business.



GLP Captures The Moment For Rapyd’s Grand EDM Dance Finale – Live Design



Demand for Server Virtualization Software Rises as Cloud and OS Technologies Proliferate: Fact.MR Exclusive Analysis – Yahoo Finance

FACT.MR

Demand For High-Speed Data Centers And Rapid Shift To Cloud Computing Systems For Efficient Virtual Server Outcomes Are Driving The Need For Server Virtualization Software

Rockville, Jan. 19, 2023 (GLOBE NEWSWIRE) -- According to Fact.MR, a market research and competitive intelligence provider, the global server virtualization software market is estimated to achieve a valuation of US$ 16 billion by 2033, expanding at a 7.1% CAGR from 2023 to 2033.

Server virtualization is a cost-effective technique to deliver web hosting services while maximizing the use of existing resources in IT infrastructure. Without server virtualization, servers use only a small part of their computing power: the workload is allocated to a subset of the network's servers while the rest sit idle, and data centers fill up with underutilized servers, wasting resources and power.
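To make the underutilization point concrete, here is a rough consolidation estimate. The utilization figures below are illustrative assumptions, not numbers from the report:

```python
import math

# Illustrative consolidation math; both utilization figures are assumptions.
physical_servers = 100
avg_utilization_pct = 15     # typical load on an unvirtualized server (assumed)
target_utilization_pct = 60  # comfortable load on a virtualized host (assumed)

# Total work expressed in "fully loaded host" units, rounded up.
hosts_needed = math.ceil(physical_servers * avg_utilization_pct / target_utilization_pct)
print(hosts_needed)  # 25
```

Under these assumptions, 25 virtualized hosts carry the same workload as 100 lightly loaded physical servers, which is the resource and power saving the press release alludes to.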

Download Sample Copy of This Report: https://www.factmr.com/connectus/sample?flag=S&rep_id=8241

Report attributes and details:

Historical Data: 2018-2022

Value Projection (2033): US$ 16 billion

Growth Rate (2023-2033): 7.1% CAGR

No. of Pages: 170

No. of Tables: 80

No. of Figures: 227

Key Takeaways from Market Study

The global server virtualization software market amounted to US$ 8 billion in 2023.

The market is predicted to evolve at a CAGR of 7.1% during the forecast period (2023 to 2033).

Revenue from the sales of server virtualization software is expected to reach US$ 16 billion by 2033.

The United States market was worth US$ 2.6 billion in 2022.

The OS-level virtualization segment is projected to increase at a CAGR of 5.5% from 2023 to 2033.
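The headline figures above are mutually consistent; a quick check using only the numbers stated in the study:

```python
# Compound growth check on the report's stated figures.
base_2023 = 8.0   # US$ billion, stated 2023 market size
cagr = 0.071      # stated 7.1% CAGR
years = 10        # forecast period 2023 -> 2033

projection_2033 = base_2023 * (1 + cagr) ** years
print(f"US$ {projection_2033:.1f} billion")
```

This lands at roughly US$ 15.9 billion, in line with the stated US$ 16 billion projection for 2033.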


Get Customization on this Report for Specific Research Solutions: https://www.factmr.com/connectus/sample?flag=RC&rep_id=8241

Competitive Landscape

The market for server virtualization software is highly competitive. Key players in the server virtualization software market are using various development techniques such as mergers and acquisitions, collaboration, and product launches to increase their market share and consumer base.

VMware, Inc., a prominent cloud computing and virtualization technology company based in the United States, confirmed the continuation of its partnership with IT behemoth Microsoft Corp. in August 2022. The partnership sought to provide enterprise accessibility to multi-cloud services in Microsoft Azure using VMware vSphere. Azure VMware Solution was introduced as part of VMware Cloud Universal to give customers a cost-effective and versatile cloud solution.

In May 2022, Red Hat, Inc., a leading US-based software company, partnered with US Department of Energy (DOE) laboratories to develop cloud environment standards in high-performance computing (HPC). The partnership sought to provide solutions for the efficient operation of ML, AI, and DL-based HPC workloads.

Alibaba Cloud, a subsidiary of Alibaba Group, announced 'Yitian 710' server chips for usage in its data centers in October 2021. Alibaba Cloud also revealed the creation of its proprietary servers, dubbed 'Panjiu,' which will be powered by these chips. The combination is anticipated to boost cloud services by lowering energy consumption and increasing computing performance.

Google LLC revealed an expansion of its Chrome Enterprise Recommended partner program to install the Chrome Operating System (OS) in contact centers in September 2021. The expansion is intended to provide a variety of benefits, including certified contact-center solutions, a secure platform and remote management, and access to virtualization desktop infrastructure.

Key Companies Profiled

Amazon.com, Inc.

Hewlett-Packard Co.

Broadcom Inc.

IBM Corp.

Capgemini SE

Cisco Systems, Inc.

Citrix Systems, Inc.

Dell Inc.

Microsoft Corporation

Regional Analysis

North America is expected to dominate the global server virtualization software market during the forecast period. The regional market is projected to be fueled by increased adoption of server virtualization, technological advancements, and increased investments in cloud-based services.

Moreover, the United States is leading the North American market due to the presence of major global information technology and telecommunications businesses such as VMware, IBM Corporation, and Cisco Systems, Inc.

Get Full Access of Complete Report: https://www.factmr.com/checkout/8241

Segmentation of Server Virtualization Software Industry Research

By Type:

OS-level Virtualization

Para Virtualization

Full Virtualization

By Deployment:

Cloud

On-premise

By Organization:

Small & Medium Enterprises

Large Enterprises

By End Use:

BFSI

Healthcare

IT & Telecommunication

Government & Public Sector

Transportation & Logistics

Manufacturing

Others

By Region:

North America

Latin America

Europe

APAC

MEA

More Valuable Insights on Offer

Fact.MR, in its new offering, presents an unbiased analysis of the global server virtualization software market, presenting historical demand data (2018-2022) and forecast statistics for the period of 2023-2033.

The study divulges essential insights on the market on the basis of type (OS-level virtualization, para virtualization, full virtualization), deployment (cloud, on-premise), organization (small & medium enterprises, large enterprises), and end use (BFSI, healthcare, IT & telecommunication, government & public sector, transportation & logistics, manufacturing, others), across five major regions (North America, Europe, Asia Pacific, Latin America, and MEA).

Check out more related studies published by Fact.MR Research:

Virtualization Software Market: Worldwide demand for virtualization software is expected to skyrocket at a CAGR of 22.3% from 2023 to 2033. Currently, the global virtualization software market is valued at US$ 40 billion and is anticipated to climb to a size of US$ 300 billion by 2033.

Network Function Virtualization (NFV) Market: The network function virtualization market share is estimated to reach a value of nearly US$ 7.8 Billion by 2032, from US$ 3.9 Billion in 2021. The network function virtualization (NFV) market is predicted to grow at a moderate CAGR of 6.6% during the forecast period.

Serial Device Server Market: Increasing demand for cost-effective solutions is a key driver of revenue growth: Global Industry Analysis and Opportunity Assessment, 2018-2027.

Server Station Market: The server station market has grown significantly in the past few years with the continuous surge in digitization. The information & technology sector makes an essential contribution to the economic development and growth of a country.

Cloud Computing Market: The global cloud computing market size is estimated to secure a market value of US$ 482 Bn in 2022. The market is expected to procure US$ 1,949 Bn by 2032 while expanding at a CAGR of 15% during the forecast period from 2022 to 2032.

About Fact.MR

We are a trusted research partner of 80% of Fortune 1000 companies across the globe. We are consistently growing in the field of market research, with more than 1,000 reports published every year. Our dedicated team of 400-plus analysts and consultants is committed to achieving the utmost level of client satisfaction.

Contact:

US Sales Office
11140 Rockville Pike, Suite 400
Rockville, MD 20852
United States
Tel: +1 (628) 251-1583, +353-1-4434-232
E: sales@factmr.com
Follow Us: LinkedIn | Twitter | Blog



Sabre CIO on the impact of cloud in travel – PhocusWire

Aviation companies and the travel industry more widely have accepted that cloud technologies will play a huge role in their businesses going forward.

In recent months, Boeing and American Airlines have talked about their cloud developments with Google and Microsoft while technology players in travel including Mews and Spotnana have highlighted their cloud developments.

Global distribution giants Amadeus, Sabre and Travelport have also stressed the importance of the cloud in terms of bringing costs down, driving efficiency and enabling more rapid development and deployment of products and services.

Sabre said in an earnings call last year that its technology milestones for 2022 were to "exit our Sabre-managed data centers and migrate to the Google Cloud" as well as to bring the customer reservations database to Google Cloud.

Joe DiFonzo, chief information officer at Sabre, has lived and breathed large-scale technology transformation and system evolution for the past 25 years, predominantly in the telecommunications sector.

He talks to PhocusWire about where Sabre and the wider industry are in their cloud journey, the benefits of the technology and the next steps. His comments have been edited for brevity.

After 25 years in telecoms with responsibility for evolving systems and platforms from mainframe to open systems and then cloud computing, it's funny to see that, coming into Sabre, there are a lot of the same characteristics.

It was obvious right away that there were issues to deal with in terms of scale, performance and economics. Our cost of operations was high compared to what I've seen in other places. Also, our ability to evolve the business quickly was lagging. We were still very old school in the way we were developing, operating and deploying our software, and it was making us slower than customers needed.

We started on the path in late 2017 with a big program to bring all our systems to the cloud and evolve our mode of operation... and we are well on that journey now. By late 2019, after two years of effort, we did a cost-benefit analysis of the multi-cloud vendor approach and decided that, if we could find the right partner, there would be more benefit in focusing on a single provider.

One of the things overlooked when you go down the multi-cloud vendor path is that when you get into what they call their platform services - databases, messaging protocols, encryption technology, security and things like AI and Big Data - it gets very unique. But that's really where the secret sauce is, that's where we're going to get a lot of value.


Google was the one most aligned with our way of thinking - it's very engineering-focused and very B2B-focused. We also saw that its big data and machine learning capabilities would be really strategic for our business specifically.

This is where I'll give a lot of credit to our senior leadership team, our board and, specifically, our CEO Sean Menke. He dug in and said this is the most important thing the company is doing, and we're going to suffer going through this pandemic as a lot of businesses in the travel space are, but we're not going to let up on the technology transformation effort. And I can attest to that, we did not let up. As a matter of fact, we doubled down and worked harder than ever and found more resources than ever through that effort in 2020, 2021 and into 2022 to make the progress we had promised.

At the end of 2022 we closed every Sabre-operated data center on the planet. There were a total of 14 when we started. By the end of 2023, we will close out operations in the DXC data center in Tulsa, which has been operating since 1972. At the end of 2023, we will still have a couple of mainframes running, one for primary service and one for disaster recovery, but the workload on them will be much lower.

By the end of this year, we will have completed the development effort to get at least all of the GDS functionality off the mainframe and into the cloud. In addition, a lot of passenger service system capabilities will be in the cloud, such as ticketing and check-in services. Another big element we're working on is the passenger name record (PNR). We're in the process of offloading all of the PNRs into the cloud, so over time the footprint of what's on the mainframe is shrinking.

We're starting to see the financial benefits of what's going on. We're seeing our hosting cost reductions as anticipated, but the bigger thing is we're starting to see real efficiency gains in day-to-day operations and our software development efforts. One of the things that has happened is that development teams have a lot more autonomy in what they do every day. We have techniques we can employ now, because of the cloud, that provide a lot more stability and security than we have ever had in our environment before.

In the old days, every time something needed to be deployed or new products launched, we would have to buy hardware, install it, configure it and network it. Every change had to be performed manually. It was not only time consuming but operationally risky, because there were a lot of places where you could make a mistake.

Moving to the cloud, there is this notion of infrastructure as code, so instead of developers saying they need X and Y, they essentially program it in a language called Terraform and deploy it through an automated deployment pipeline. They can do it in minutes, and because of the cloud's capability to dynamically create infrastructure we can do blue-green deployments, where we have the existing system running, materialize a new copy of the system, make sure it's running properly, cut the customer connections over to that and delete the old copy.
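The blue-green sequence DiFonzo describes can be sketched in a few steps. The sketch below is an illustrative in-memory model, not Sabre's actual tooling; every function and name in it is hypothetical, and a real pipeline would drive Terraform and cloud APIs instead:

```python
# Minimal sketch of a blue-green cutover: build the new copy, verify it,
# switch traffic, then retire the old copy. All names are hypothetical.

def blue_green_cutover(live_env, build_new_env, is_healthy, route_traffic):
    """Materialize a new environment, check it, cut traffic over, retire the old."""
    green = build_new_env()            # stand up the new ("green") environment
    if not is_healthy(green):          # verify before touching customers
        raise RuntimeError("new environment failed health check; keeping blue")
    route_traffic(green)               # cut customer connections over
    live_env["retired"] = True         # old ("blue") copy can now be deleted
    return green

# Toy usage with dicts standing in for real infrastructure:
blue = {"version": "v1", "retired": False}
traffic = {"target": blue}
green = blue_green_cutover(
    live_env=blue,
    build_new_env=lambda: {"version": "v2", "retired": False},
    is_healthy=lambda env: env["version"] == "v2",
    route_traffic=lambda env: traffic.update(target=env),
)
print(traffic["target"]["version"])  # v2
```

The key property is that the health check happens before any customer connection moves, which is why changes that used to be risky manual work can be automated safely.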

Changes that used to take days and weeks happen in minutes and hours.

We have all these capabilities we didn't have before, such as new database and communications technology. But probably the biggest game-changer is a technology called Vertex AI, which is Google's AI/machine learning platform.

You've probably heard us talk about Sabre's travel AI capabilities. They are a series of individual microservices, with each one very focused on a specific task, e.g. optimizing ancillary offers or hotel recommendations for agencies. We basically built those things using a combination of cloud infrastructure, ML models and the ML training infrastructure. So we can turn these things around very quickly, use all the data Sabre has collected and get these capabilities to market very quickly. We're now getting customers in and whipping up a real working prototype, with real data that is close to production, ready for them to try.

We are already rolling out things like Travel AI and Intelligent Retailing. With the capabilities, and with Google, we're cycling faster and faster and seeing that flywheel effect in terms of how quickly we can roll out these products.


Now we get to the next level from a technical perspective in terms of how our developers develop and deliver software. It's not only about getting these new products to market but also delivering new features and functionality for existing products more quickly and safely than ever before. Saving money is good, but the bigger thing is generating new revenue and being an efficient and effective software developer.

It's a huge challenge. We have clear guidelines on what we're allowed and not allowed to do to ensure customers are not impacted and, if they are, that it's minimal impact.

All of these migrations have been in a mode where we have a version of the system in the cloud and a version in the non-cloud. We have a very involved transition process. For a long time it runs in both, until eventually it's only in the cloud. All these things are carefully timed, staged and tested all the way through because the objective is to not impact customers. If possible, they're not even aware that we've completed these migrations.

Those PNRs are going to have to be around for a while. Even if you look at what's going on in the industry, if you look at the predictions from IATA, you're talking about probably 2030 before everybody has migrated to that new model. Some would say that's a rosy prediction. So, we're going to be dealing with legacy for a while, and essentially PNRs will become a legacy thing instead of the primary thing as we move to order-based systems. But now we have the PNRs in the cloud, it's easier to integrate our offer-order capability because they'll all be cloud hosted and the cloud APIs are much easier to deal with.

I think we're seeing much more openness. Many of our customers were watching very closely what we were doing, what we were going through and what kind of success we were having before they were ready to jump. I would definitely say more and more companies in travel are looking at cloud because it boils down to the fact that it's the right way to do computing.



cPanel Partners With CloudFest to Bring CloudFest USA Back to … – InvestorsObserver

AUSTIN, Texas, Jan. 19, 2023 (GLOBE NEWSWIRE) -- cPanel L.L.C., the Hosting Platform of Choice, a WebPros portfolio company, has been announced as the Exclusive Title Sponsor for CloudFest USA 2023.

Taking place in Austin, Texas, from May 31 through June 1, CloudFest is the industry's leading conference for cloud, hosting, and internet service providers. By partnering with cPanel, CloudFest has ensured that attendees will experience a one-of-a-kind conference with equal parts education, information, networking, and entertainment.

"CloudFest EU is something we look forward to every year," said Jesse Asklund, Chief Experience Officer at WebPros. "We are incredibly honored and excited to be the Exclusive Title Sponsor for CloudFest USA in its first year back in the states and are looking forward to bringing back core components of past cPanel Conferences."

With an agenda focused on sessions led by industry leaders and experts, as well as the showcasing of new products from cPanel and other flagship WebPros brands within the portfolio, CloudFest serves to position attendees for success in 2023 and beyond.

Super early bird pricing lasts through January 20.

For more information about CloudFest 2023, visit the event website at https://www.cloudfest.com/usa/.

About WebPros

WebPros provides some of the most widely used web-based digitalization solutions. WebPros encompasses comprehensive hosting and server management platforms cPanel and Plesk, automation platform WHMCS, infrastructure management SolusVM, server monitoring platform NIXStats, Koality performance software, web builder platform Sitejet, and SEO suite XOVI. Under the WebPros canopy, these independent companies are continually exploring synergies in pursuit of responding to the challenges of web professionals everywhere. For more information, visit http://www.webpros.com .

About cPanel, L.L.C.

Acquired by WebPros in 2019, cPanel provides one of the Internet industry's most reliable and intuitive web hosting automation software platforms. With its rich feature set and customer-first support, the fully-automated hosting server management platform empowers infrastructure providers and gives customers the ability to administer every aspect of their website using simple point-and-click software. Based in Houston, TX, cPanel employs over 260 team members and has customers in more than 70 countries.

"cPanel" and "cPanel & WHM" are registered trademarks of cPanel, L.L.C.

Contact Information: Sean Melton Vice President of Marketing sean.melton@webpros.com (346) 855-4093


This content was issued through the press release distribution service at Newswire.com.



Basecamp details ‘obscene’ $3.2 million bill that caused it to quit the cloud – The Register

David Heinemeier Hansson, CTO of 37Signals, which operates project management platform Basecamp and other products, has detailed the colossal cloud bills that saw the outfit quit the cloud in October 2022.

The CTO and creator of Ruby on Rails did all the sums and came up with an eye-watering cloud bill of $3,201,564 for 2022, or $266,797 each month.

Plenty of that spend ($759,983) went on compute, in the form of Amazon Web Services' EC2 and EKS services.

On Twitter, Hansson contrasted that cost with the spend needed to acquire servers packing 288 vCPUs, and plenty more besides, over three years.

Hansson was at pains to point out that even that bill was the result of a concerted effort to keep it low.

"Getting this massive spend down to just $3.2 million has taken a ton of work. The ops team runs a vigilant cost-inspection program, with monthly reporting and tracking, and we've entered into long-term agreements on Reserved Instances and committed usage, as part of a Private Pricing Agreement," he wrote. "This is a highly optimized budget."

But it's also a budget he thinks could be "dramatically cut" with a move to owned Dell hardware and managed hosting from an outfit called Deft.

Hansson revealed that the business's single biggest cloudy line item is $907,837.83 to store over eight petabytes of data in AWS's Simple Storage Service (S3).

"It's worth noting that this setup uses a dual-region replication strategy, so we're resilient against an entire AWS region disappearing, including all the availability zones," he added. He also pointed out that 37Signals spent $66,742 ($5,562/month) on AWS's CloudFront content delivery service to move the data out of the cloud.
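Taking the quoted figures at face value, the implied unit cost of that S3 line item can be worked out. Reading "over eight petabytes" as exactly 8,000 TB is an assumption made here for the arithmetic:

```python
# Back-of-envelope unit cost from the figures quoted in the article.
s3_annual_usd = 907_837.83   # stated annual S3 spend
storage_tb = 8_000           # "over eight petabytes", assumed as exactly 8,000 TB

s3_monthly_usd = s3_annual_usd / 12
per_tb_month = s3_monthly_usd / storage_tb
print(round(s3_monthly_usd), round(per_tb_month, 2))  # 75653 9.46
```

Roughly $9.50 per TB-month covers the dual-region replication setup described above, which is the figure a buyer would compare against owned storage hardware.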

The CTO didn't detail how, or if, using less cloud will let 37Signals achieve the same resilience it enjoys in AWS. But he promised to repeat the accounting exercise in public next year, to reveal how the enterprise saves money.

The Register has already set a reminder to check for this year's bill in early 2024.



Microsoft set to make 5% of workforce redundant – Information Age

American software giant Microsoft is to make 11,000 of its workforce redundant, partly as a result of reduced demand for its Azure cloud hosting platform

Microsoft is set to make 5 per cent of its workforce redundant, affecting about 11,000 employees, according to Sky News, as big tech reacts to weakening demand.

As the economy has slowed, so too has the money that some businesses are willing to spend on expensive software subscriptions. That said, global spending on software is still set to rise by 9.3 per cent in 2023 to $856bn, according to Gartner.

The American software giant, based in Redmond, Washington, has a global workforce of some 220,000 employees, 6,000 of them in the UK.

Microsoft is planning to lay off employees in several engineering divisions from today, Bloomberg News reported.

It was unclear whether or how many UK-based positions might be affected.

Microsoft warned in October of a slowdown in its cloud computing business, an acknowledgement that corporate customers were re-evaluating spending in response to the economic downturn.

"In a world facing increasing headwinds, digital technology is the ultimate tailwind," Satya Nadella, Microsoft's chairman and chief executive, said in October.

Microsoft is only the latest big tech firm to make layoffs as technology firms struggle with reduced demand following the pandemic tech boom.

Cloud software provider Salesforce, which employs more than 2,500 people in Britain, said it would cut 8,000 jobs. Marc Benioff admitted to overhiring in the pandemic, which has squeezed the bottom line. In the three months to October, sales surged 14 per cent to $7.8 billion, but profits more than halved to $210m.

Meanwhile Meta Platforms, owner of Facebook and Instagram, is reducing its workforce by around 11,000 jobs.

According to tracking website Layoffs.fyi, tech companies laid off more than 154,000 employees last year. An estimated 26,000 employees have been laid off since the start of this year.




Who Owns the Generative AI Platform? – Andreessen Horowitz

We're starting to see the very early stages of a tech stack emerge in generative artificial intelligence (AI). Hundreds of new startups are rushing into the market to develop foundation models, build AI-native apps, and stand up infrastructure/tooling.

Many hot technology trends get over-hyped far before the market catches up. But the generative AI boom has been accompanied by real gains in real markets, and real traction from real companies. Models like Stable Diffusion and ChatGPT are setting historical records for user growth, and several applications have reached $100 million of annualized revenue less than a year after launch. Side-by-side comparisons show AI models outperforming humans in some tasks by multiple orders of magnitude.

So, there is enough early data to suggest massive transformation is taking place. What we don't know, and what has now become the critical question, is: Where in this market will value accrue?

Over the last year, we've met with dozens of startup founders and operators in large companies who deal directly with generative AI. We've observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven't yet achieved large commercial scale.

In other words, the companies creating the most value (i.e., training generative AI models and applying them in new apps) haven't captured most of it. Predicting what will happen next is much harder. But we think the key thing to understand is which parts of the stack are truly differentiated and defensible. This will have a major impact on market structure (i.e., horizontal vs. vertical company development) and the drivers of long-term value (e.g., margins and retention). So far, we've had a hard time finding structural defensibility anywhere in the stack, outside of traditional moats for incumbents.

We are incredibly bullish on generative AI and believe it will have a massive impact in the software industry and beyond. The goal of this post is to map out the dynamics of the market and start to answer the broader questions about generative AI business models.

To understand how the generative AI market is taking shape, we first need to define how the stack looks today. Here's our preliminary view.

The stack can be divided into three layers:

1. Applications that integrate generative AI models into a user-facing product, either running their own model pipelines ("end-to-end apps") or relying on a third-party API.
2. Models that power AI products, made available either as proprietary APIs or as open-source checkpoints (which, in turn, require a hosting solution).
3. Infrastructure vendors (i.e., cloud platforms and hardware manufacturers) that run training and inference workloads for generative AI models.

It's important to note: This is not a market map, but a framework to analyze the market. In each category, we've listed a few examples of well-known vendors. We haven't made any attempt to be comprehensive or list all the amazing generative AI applications that have been released. We're also not going deep here on MLops or LLMops tooling, which is not yet highly standardized and will be addressed in a future post.

In prior technology cycles, the conventional wisdom was that to build a large, independent company, you must own the end customer, whether that meant individual consumers or B2B buyers. It's tempting to believe that the biggest companies in generative AI will also be end-user applications. So far, it's not clear that's the case.

To be sure, the growth of generative AI applications has been staggering, propelled by sheer novelty and a plethora of use cases. In fact, we're aware of at least three product categories that have already exceeded $100 million of annualized revenue: image generation, copywriting, and code writing.

However, growth alone is not enough to build durable software companies. Critically, growth must be profitable in the sense that users and customers, once they sign up, generate profits (high gross margins) and stick around for a long time (high retention). In the absence of strong technical differentiation, B2B and B2C apps drive long-term customer value through network effects, holding onto data, or building increasingly complex workflows.
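The interplay of gross margin and retention described above can be made concrete with a standard lifetime-value calculation. The following is a minimal sketch with hypothetical numbers (none of the figures come from the article) showing why two apps with identical top-line revenue can have very different long-term customer value:

```python
# Hypothetical illustration: long-term customer value rises with both
# gross margin and retention. All numbers below are assumptions.
def lifetime_value(monthly_revenue, gross_margin, monthly_churn):
    """Simple LTV: monthly gross profit divided by the churn rate."""
    return monthly_revenue * gross_margin / monthly_churn

# Two hypothetical apps with the same $100/month top line:
sticky = lifetime_value(monthly_revenue=100, gross_margin=0.90, monthly_churn=0.02)
leaky = lifetime_value(monthly_revenue=100, gross_margin=0.55, monthly_churn=0.10)

print(f"high-margin, high-retention app LTV: ${sticky:,.0f}")  # $4,500
print(f"low-margin, low-retention app LTV: ${leaky:,.0f}")  # $550
```

Under these assumed inputs, the stickier app is worth roughly eight times more per customer, which is why the post treats margins and retention, not growth alone, as the key questions.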

In generative AI, those assumptions don't necessarily hold true. Across app companies we've spoken with, there's a wide range of gross margins: as high as 90% in a few cases, but more often as low as 50-60%, driven largely by the cost of model inference. Top-of-funnel growth has been amazing, but it's unclear if current customer acquisition strategies will be scalable; we're already seeing paid acquisition efficacy and retention start to tail off. Many apps are also relatively undifferentiated, since they rely on similar underlying AI models and haven't discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.

So, it's not yet obvious that selling end-user apps is the only, or even the best, path to building a sustainable generative AI business. Margins should improve as competition and efficiency in language models increase (more on this below). Retention should increase as AI tourists leave the market. And there's a strong argument to be made that vertically integrated apps have an advantage in driving differentiation. But there's a lot still to prove out.

Looking ahead, generative AI app companies face some big questions around differentiation, retention, and margins.

What we now call generative AI wouldn't exist without the brilliant research and engineering work done at places like Google, OpenAI, and Stability. Through novel model architectures and heroic efforts to scale training pipelines, we all benefit from the mind-blowing capabilities of current large language models (LLMs) and image-generation models.

Yet the revenue associated with these companies is still relatively small compared to the usage and buzz. In image generation, Stable Diffusion has seen explosive community growth, supported by an ecosystem of user interfaces, hosted offerings, and fine-tuning methods. But Stability gives their major checkpoints away for free as a core tenet of their business. In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT. But relatively few killer apps built on OpenAI exist so far, and prices have already dropped once.

This may be just a temporary phenomenon. Stability is a new company that hasn't focused yet on monetization. OpenAI has the potential to become a massive business, earning a significant portion of all NLP category revenues as more killer apps are built, especially if their integration into Microsoft's product portfolio goes smoothly. Given the huge usage of these models, large-scale revenues may not be far behind.

But there are also countervailing forces. Models released as open source can be hosted by anyone, including outside companies that don't bear the costs associated with large-scale model training (up to tens or hundreds of millions of dollars). And it's not clear if any closed-source models can maintain their edge indefinitely. For example, we're starting to see LLMs built by companies like Anthropic, Cohere, and Character.ai come closer to OpenAI levels of performance, trained on similar datasets (i.e., the internet) and with similar model architectures. The example of Stable Diffusion suggests that if open source models reach a sufficient level of performance and community support, then proprietary alternatives may find it hard to compete.

Perhaps the clearest takeaway for model providers, so far, is that commercialization is likely tied to hosting. Demand for proprietary APIs (e.g. from OpenAI) is growing rapidly. Hosting services for open-source models (e.g. Hugging Face and Replicate) are emerging as useful hubs to easily share and integrate models, and even have some indirect network effects between model producers and consumers. There's also a strong hypothesis that it's possible to monetize through fine-tuning and hosting agreements with enterprise customers.

Beyond that, though, model providers still face a number of big open questions about long-term differentiation and commercial scale.

Nearly everything in generative AI passes through a cloud-hosted GPU (or TPU) at some point. Whether for model providers / research labs running training workloads, hosting companies running inference/fine-tuning, or application companies doing some combination of both, FLOPS are the lifeblood of generative AI. For the first time in a very long time, progress on the most disruptive computing technology is massively compute bound.

As a result, a lot of the money in the generative AI market ultimately flows through to infrastructure companies. To put some very rough numbers around it: We estimate that, on average, app companies spend around 20-40% of revenue on inference and per-customer fine-tuning. This is typically paid either directly to cloud providers for compute instances or to third-party model providers who, in turn, spend about half their revenue on cloud infrastructure. So, it's reasonable to guess that 10-20% of total revenue in generative AI today goes to cloud providers.
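The flow-of-funds estimate above can be checked with a few lines of arithmetic. This sketch applies the post's rough averages to a hypothetical $1 of application revenue:

```python
# Rough sketch of the flow-of-funds estimate above, applied to a
# hypothetical $1.00 of app revenue. The percentages are the post's
# rough averages, not precise figures.
infra_share_of_app_revenue = (0.20, 0.40)  # inference + fine-tuning spend
model_provider_cloud_share = 0.50          # model providers' onward cloud spend

# When apps pay a model provider instead of a cloud directly, roughly
# half of that 20-40 cents flows onward to the cloud providers:
low = infra_share_of_app_revenue[0] * model_provider_cloud_share
high = infra_share_of_app_revenue[1] * model_provider_cloud_share
print(f"cloud share of generative AI revenue: {low:.0%}-{high:.0%}")  # 10%-20%
```

The lower bound matches the post's 10-20% guess; the true figure depends on how much app spend goes directly to clouds versus through model providers.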

On top of this, startups training their own models have raised billions of dollars in venture capital, the majority of which (up to 80-90% in early rounds) is typically also spent with the cloud providers. Many public tech companies spend hundreds of millions per year on model training, either with external cloud providers or directly with hardware manufacturers.

This is what we'd call, in technical terms, "a lot of money," especially for a nascent market. Most of it is spent at the Big 3 clouds: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. These cloud providers collectively spend more than $100 billion per year in capex to ensure they have the most comprehensive, reliable, and cost-competitive platforms. In generative AI, in particular, they also benefit from supply constraints because they have preferential access to scarce hardware (e.g. Nvidia A100 and H100 GPUs).

Interestingly, though, we are starting to see credible competition emerge. Challengers like Oracle have made inroads with big capex expenditures and sales incentives. And a few startups, like Coreweave and Lambda Labs, have grown rapidly with solutions targeted specifically at large model developers. They compete on cost, availability, and personalized support. They also expose more granular resource abstractions (i.e. containers), while the large clouds offer only VM instances due to GPU virtualization limits.

Behind the scenes, running the vast majority of AI workloads, is perhaps the biggest winner in generative AI so far: Nvidia. The company reported $3.8 billion of data center GPU revenue in the third quarter of its fiscal year 2023, including a meaningful portion for generative AI use cases. And they've built strong moats around this business via decades of investment in the GPU architecture, a robust software ecosystem, and deep usage in the academic community. One recent analysis found that Nvidia GPUs are cited in research papers 90 times more than the top AI chip startups combined.

Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the game, is also entering the market with their high-end Habana chips and Ponte Vecchio GPUs. But so far, few of these new chips have taken significant market share. The two exceptions to watch are Google, whose TPUs have gained traction in the Stable Diffusion community and in some large GCP deals, and TSMC, who is believed to manufacture all of the chips listed here, including Nvidia GPUs (Intel uses a mix of its own fabs and TSMC to make its chips).

Infrastructure is, in other words, a lucrative, durable, and seemingly defensible layer in the stack. But big questions remain for infra companies as well.

Of course, we don't know yet. But based on the early data we have for generative AI, combined with our experience with earlier AI/ML companies, our intuition is the following.

There don't appear, today, to be any systemic moats in generative AI. As a first-order approximation, applications lack strong product differentiation because they use similar models; models face unclear long-term differentiation because they are trained on similar datasets with similar architectures; cloud providers lack deep technical differentiation because they run the same GPUs; and even the hardware companies manufacture their chips at the same fabs.

There are, of course, the standard moats: scale moats ("I have or can raise more money than you!"), supply-chain moats ("I have the GPUs, you don't!"), ecosystem moats ("Everyone uses my software already!"), algorithmic moats ("We're more clever than you!"), distribution moats ("I already have a sales team and more customers than you!") and data pipeline moats ("I've crawled more of the internet than you!"). But none of these moats tend to be durable over the long term. And it's too early to tell if strong, direct network effects are taking hold in any layer of the stack.

Based on the available data, it's just not clear if there will be a long-term, winner-take-all dynamic in generative AI.

This is weird. But to us, it's good news. The potential size of this market is hard to grasp (somewhere between all software and all human endeavors), so we expect many, many players and healthy competition at all levels of the stack. We also expect both horizontal and vertical companies to succeed, with the best approach dictated by end-markets and end-users. For example, if the primary differentiation in the end-product is the AI itself, it's likely that verticalization (i.e., tightly coupling the user-facing app to the home-grown model) will win out. Whereas if the AI is part of a larger, long-tail feature set, then it's more likely horizontalization will occur. Of course, we should also see the building of more traditional moats over time, and we may even see new types of moats take hold.

Whatever the case, one thing we're certain about is that generative AI changes the game. We're all learning the rules in real time, there is a tremendous amount of value that will be unlocked, and the tech landscape is going to look much, much different as a result. And we're here for it!

All images in this post were created using Midjourney.

Read the rest here:

Who Owns the Generative AI Platform? - Andreessen Horowitz

Categories
Cloud Hosting

3 Warren Buffett Stocks That Could Soar 33% to 80% in 2023 … – The Motley Fool

Warren Buffett handily beat the S&P 500 in 2022, thanks in large part to strong performances from a couple of oil stocks in Berkshire Hathaway's (BRK.A -0.92%) (BRK.B -1.00%) portfolio. But analysts don't expect those two oil stocks will have as much room to run this year.

If the analysts are right, Buffett will need help from other stocks to keep up his market-beating ways. He could be in luck, to some extent. Here are three Buffett stocks that could soar 33% to 80% in 2023, according to Wall Street.

Snowflake (SNOW -3.48%) stood among Buffett's biggest losers last year. Shares of the cloud-based analytics software provider plunged nearly 58% in 2022. However, the consensus Wall Street price target for the stock reflects an upside potential of 33%.
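The "upside potential" figures quoted throughout this piece come straight from comparing a consensus price target to the current share price. A minimal sketch of that calculation (the prices below are illustrative assumptions, not actual Snowflake quotes):

```python
# Hypothetical illustration of how a consensus price target translates
# into "upside potential." The prices here are assumptions.
def implied_upside(current_price, price_target):
    """Fractional gain if the stock reaches the analyst target."""
    return price_target / current_price - 1

# e.g. a $133 target on a $100 stock implies roughly 33% upside
print(f"{implied_upside(100.0, 133.0):.0%}")  # 33%
```

Note that a price target is an analyst forecast, not a guarantee; the article's 33%, 42%, and 80% figures all follow from this same ratio.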

Piper Sandler analyst Brent Bracelin recently lowered his price target for Snowflake, and yet the stock still jumped. Why? Although Bracelin predicts slower growth for the cloud computing market in 2023, he thinks that Snowflake will grow faster than the overall market. Investors viewed his 3% downward revision to the price target for the stock as a positive in light of the context.

This relative optimism appears to be justified. Snowflake continues to deliver impressive revenue growth, despite facing tough macroeconomic headwinds. The company is even hiring, while many tech companies are downsizing.

Berkshire Hathaway's stake in Snowflake is too small right now to move the needle very much, though. It's doubtful that Buffett or his investment managers will significantly increase their position in light of Snowflake's still-lofty valuation.

You can lump Amazon (AMZN -1.86%) into the category of Buffett's big losers of 2022 as well. Shares of the e-commerce and cloud-hosting giant tanked nearly 50% last year.

But Amazon is off to a good start in 2023. Analysts think it could go much higher, with the consensus price target 42% above the current share price.

The optimism about Amazon extends beyond Wall Street. Famous (and highly successful) investor Bill Miller, head of Miller Value Partners, thinks that the stock is a no-brainer buy. He recently told CNBC that Amazon is "one of the easiest names in the market right now." Miller believes that Amazon Web Services alone justifies the company's current market cap.

Miller could be right. Amazon's steep sell-off last year stemmed primarily from investors' reaction to the company's slowing growth. However, that sluggishness was due mainly to economic uncertainty. Amazon's long-term prospects remain strong.

As is the case with Snowflake, Buffett won't receive a tremendous benefit, even if Amazon stock skyrockets this year. The stock makes up only 0.3% of Berkshire's portfolio. But it wouldn't be shocking if investment managers Todd Combs and/or Ted Weschler convince Buffett to buy more shares of Amazon.

It's kind of a same-song-different-verse story with Nu Holdings (NU -0.55%). The fintech stock plummeted nearly 57% last year. However, Wall Street has high expectations for Nu, with a price target reflecting an upside potential of around 80%.

Nu's digital-banking platform serves over 70 million customers in Brazil, Colombia, and Mexico, and business is booming for the Latin American fintech leader. The company's revenue skyrocketed 171% year over year to $1.3 billion in Q3 -- an all-time high.

The recent political turmoil in Brazil shouldn't impact Nu very much. Nu founder and CEO David Vélez noted in the company's Q3 call that the company has grown throughout its history, despite seeing the biggest recession in Brazilian history in 2015 and 2016, a presidential impeachment, and both left-leaning and right-leaning political parties gaining power in the countries where it operates.

If Nu does soar this year, will it go a long way toward helping Buffett beat the market? Nope. Berkshire has a smaller position in Nu than it does in Amazon and Snowflake. While these three stocks could be big winners for Buffett in 2023, the legendary investor will still need help from other stocks that Berkshire owns.

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Keith Speights has positions in Amazon.com and Berkshire Hathaway. The Motley Fool has positions in and recommends Amazon.com, Berkshire Hathaway, and Snowflake. The Motley Fool recommends the following options: long January 2023 $200 calls on Berkshire Hathaway, short January 2023 $200 puts on Berkshire Hathaway, and short January 2023 $265 calls on Berkshire Hathaway. The Motley Fool has a disclosure policy.

Follow this link:

3 Warren Buffett Stocks That Could Soar 33% to 80% in 2023 ... - The Motley Fool

Categories
Cloud Hosting

Earth Bogle: Campaigns Target the Middle East with Geopolitical … – Trend Micro

We discovered an active campaign ongoing since at least mid-2022 which uses Middle Eastern geopolitical-themed lures to distribute NjRAT (also known as Bladabindi) to infect victims across the Middle East and North Africa.

By: Peter Girnus, Aliakbar Zahravi | January 17, 2023

While threat hunting, we found an active campaign using Middle Eastern geopolitical themes as a lure to target potential victims in the Middle East and Africa. In this campaign, which we have labeled Earth Bogle, the threat actor uses public cloud storage services such as files.fm and failiem.lv to host malware, while compromised web servers distribute NjRAT.

NjRAT (also known as Bladabindi) is a remote access trojan (RAT) malware first discovered in 2013. It is primarily used to gain unauthorized access and control over infected computers and has been used in various cyberattacks to target individuals and organizations in the Middle East. Users and security teams are recommended to keep their systems security solutions updated and their respective cloud infrastructures properly secured to defend against this threat.

Routine

The malicious file is hidden inside a Microsoft Cabinet (CAB) archive file masquerading as a sensitive audio file, named using a geopolitical theme as a lure to entice victims to open it. The distribution mechanism could be via social media (Facebook and Discord appear to be favored among these campaigns), file sharing (OneDrive), or a phishing email. The malicious CAB file contains an obfuscated VBS (Visual Basic Script) dropper responsible for delivering the next stage of the attack.

Once the malicious CAB file is downloaded, the obfuscated VBS script runs to fetch the malware from a compromised or spoofed host. It then retrieves a PowerShell script responsible for injecting NjRAT into the compromised victim's machine.

Use of Middle Eastern Geopolitical Themes as Lures

The initial CAB files have exceptionally low detection rates on VirusTotal (SHA256: a7e2b399b9f0be7e61977b51f6d285f8d53bd4b92d6e11f74660791960b813da and 4985b6e286020de70f0b74d457c7e387463ea711ec21634e35bc46707dfe4c9b), which allows the attackers to remain undetected and spread their attack across the region. The group behind the campaign uses public cloud hosting services to host malicious CAB files and uses themed lures to entice Arabic speakers into opening the infected file.
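File hashes like the two above are the simplest indicators of compromise to act on. The following is a hedged sketch (not from the report) of how a defender could check a downloaded file against these SHA-256 values; the helper name and file path are illustrative assumptions:

```python
# Sketch of a basic IOC check: hash a local file in chunks and compare
# it against the SHA-256 values reported for the malicious CAB files.
import hashlib

KNOWN_BAD_SHA256 = {
    "a7e2b399b9f0be7e61977b51f6d285f8d53bd4b92d6e11f74660791960b813da",
    "4985b6e286020de70f0b74d457c7e387463ea711ec21634e35bc46707dfe4c9b",
}

def is_known_bad(path):
    """Return True if the file's SHA-256 matches a known IOC hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() in KNOWN_BAD_SHA256
```

Hash matching only catches byte-identical samples, which is one reason the report also stresses behavioral defenses and URL blocking.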

One of the malicious CAB files has a name that translates to "A voice call between Omar, the reviewer of the command of Tariq bin Ziyad's force, with an Emirati officer.cab". The attacker uses the lure of a supposedly sensitive voice call between an Emirati military officer and a member of the Tariq bin Ziyad (TBZ) Militia, a powerful Libyan faction. The file lures victims in the region into opening it by insinuating a false link between the UAE and a group associated with war crimes, appealing to political interests and emotions. These lures are consistent with a campaign disclosed in December 2022 that used Facebook advertisements on spoofed pages of Middle Eastern news outlets, which were shared and pushed to other users by unsuspecting mules.

This malicious CAB file contains an obfuscated VBS script that functions as the agent responsible for delivering the next payload. When a victim opens the malicious CAB file and runs the VBS file, the second stage payload is retrieved.

Delivering the PowerShell Payload

The second stage payload is an obfuscated VBS script file masquerading as an image file (SHA256: 6560ef1253f239a398cc5ab237271bddd35b4aa18078ad253fd7964e154a2580). When this malicious file is run, a malicious PowerShell script is retrieved.

The domain delivering the malicious PowerShell script is an infected or spoofed host with documented affiliations with the Libyan Army, and a quick check on the domain gpla[.]gov[.]ly shows it was registered in 2019.

Similar campaigns have suggested the creation, use, and abuse of fake social media accounts claiming to belong to reputable organizations, serving advertisements with links to public cloud sharing platforms that deliver malicious payloads to unsuspecting victims. This approach lends the lures apparent legitimacy and extends the threat actors' reach.

We also noted that the domain gpla[.]gov[.]ly has a history of compromise going back to at least 2021.

Second-Stage Dropper Overview

The second-stage dropper (SHA256: 78ac9da347d13a9cf07d661cdcd10cb2ca1b11198e4618eb263aec84be32e9c8) is an obfuscated PowerShell script that drops five files in total: two binaries, a VBS script, a PowerShell script, and a Windows batch script.

Each of these modules plays its own role in the infection chain.

Upon execution, the second-stage dropper kills a number of .NET-related processes on the infected system. After which, KxFXQGVBtB.ps1 executes aspnet_compiler.exe in conjunction with the process injector to inject NjRAT.

[Reflection.Assembly]::Load($MyS).GetType('NewPE2.PE').GetMethod('Execute').Invoke($null, [object[]]($JKGHJKHGJKJK, $serv));

The dropper further drops "rYFFCeKHlIT.bat" in C:\Users\Public and creates a directory called "WindowsHost" in C:\ProgramData to store the VBScript file "gJhkEJvwBCHe.vbs". On deobfuscation, gJhkEJvwBCHe.vbs runs the rYFFCeKHlIT.bat file, which is responsible for executing another PowerShell script called "KxFXQGVBtB.ps1" with a flag that bypasses the PowerShell execution policy.

"KxFXQGVBtB.ps1" is the final PowerShell dropper responsible for loading the NjRAT binary into memory and injecting it into the legitimate .NET binary file called "aspnet_compiler.exe" via the process injector. The PowerShell script uses the [Reflection.Assembly]::Load" method to load the process injector ($Mys) into the memory. It then invokes a method called 'Execute' with two parameters. The first parameter is a full path to the PEfile to inject ("C:WindowsMicrosoft.NETFrameworkaspnet_compiler.exe"), and the second parameter is the primary payload NjRAT ($serv).

The following snippet demonstrates the process injector functions. The file has been obfuscated via SmartAssembly:

The final payload of this campaign is NjRAT, allowing attackers to conduct a myriad of intrusive activities on infected systems, such as stealing sensitive information, taking screenshots, spawning a reverse shell, manipulating processes, the registry, and files, uploading/downloading files, and performing other operations.

The dropper achieves persistence on an infected system by adding the directory C:\ProgramData\WindowsHost to the "User Shell Folders" and "Shell Folders" registry startup keys accordingly.
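The shell-folder persistence described above suggests a simple detection heuristic: flag a Startup folder value that has been redirected away from the Windows default. The sketch below (an assumption-laden illustration, not from Trend Micro's report) simulates the registry values with a plain dict so the logic stays portable; on a real Windows host the values would be read from the Explorer "User Shell Folders" key via the winreg module.

```python
# Hedged detection sketch: flag a redirected Startup shell folder.
# The default path below is an assumption about a typical per-user
# configuration; registry access is simulated with a plain dict.
DEFAULT_STARTUP = r"%USERPROFILE%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup"

def suspicious_startup(shell_folders):
    """Return Startup entries redirected away from the default location."""
    return {name: path for name, path in shell_folders.items()
            if name.lower() == "startup" and path != DEFAULT_STARTUP}

# Example mimicking the dropper's persistence entry:
infected = {"Startup": r"C:\ProgramData\WindowsHost"}
print(suspicious_startup(infected))  # {'Startup': 'C:\\ProgramData\\WindowsHost'}
```

A real implementation would also account for legitimate folder redirection (e.g., roaming profiles), so a hit here is a lead for investigation rather than proof of compromise.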

Conclusion

This case demonstrates that threat actors will leverage public cloud storage as malware file servers, combined with social engineering techniques appealing to people's sentiments, such as regional geopolitical themes used as lures, to infect targeted populations. Furthermore, governments weakened by regional conflict are at a higher risk of compromise, wherein threat actors and advanced persistent threat (APT) groups could compromise and use government infrastructure in targeted campaigns. This is compounded by the ability to share cloud storage content via advertising and social media, presenting an opportunity for threat actors and APT groups to reach a wider infection radius.

Organizations can protect themselves by remaining vigilant against phishing attacks and skeptical of sensational topics and themes abused online as lures. Users should be wary of opening suspicious archive files such as CAB files, especially from public sources where the risks of compromise are high. Security teams should be aware of the dynamic nature of conflict zones when considering a security posture. Organizations can also consider a cutting-edge multilayered defensive strategy that can detect, scan, and block malicious URLs.

Indicators of Compromise (IOCs)

Download the full list of IOCs here.

Continue reading here:

Earth Bogle: Campaigns Target the Middle East with Geopolitical ... - Trend Micro