
MilesWeb: the future ahead is aligned towards massive growth – HostReview.com

An Exclusive Interview with Deepak Kori, Founder & Director, MilesWeb. The company has just been named one of the Most Innovative Web Hosting Companies in the USA in 2022.

MilesWeb was founded in 2012 with a strong vision: to help businesses of every size and industry vertical, as well as large enterprises, succeed online with feature-rich, reliable and competitively priced web hosting services.

Looking back at that time, I still remember we started in a tiny office space in Nashik, Maharashtra (India). Since our inception, we've evolved so much.

It's really an exciting time for us, standing proudly as a renowned, leading Indian web hosting provider stretching its roots into the international market.

Our portfolio includes a wide range of web hosting solutions, including shared hosting, WordPress hosting, reseller hosting, cloud hosting, VPS hosting, dedicated servers, WordPress cloud, SSL, domains and more.

With this, we promise 99.95% uptime to keep our clients' sites online and 24/7 support to address their technical concerns.

To make it all much smoother, we're committed to giving our clients absolutely everything they need to host and manage a website.

What makes your company innovative/unique? What are the key partnerships and initiatives driving that innovation?

I believe that in the rapidly developing enterprise market there's always scope to innovate, and that's what we follow.

At MilesWeb, we focus on improving the overall web hosting experience of our customers. This includes optimized website speed, improvements in email delivery, a premium site scanner, security software and more.

Of course, customer support is another factor that sets us apart.

We provide 24/7 managed support via email, live chat, phone and tickets, in English as well as regional languages like Hindi and Marathi, offering the highest level of convenience.

Whether a client faces a small or a big technical issue, at 3 am at night or 4 pm in the afternoon, our team is up for assistance. I think it's our support and professional service that make customers prefer us.

Beyond this, we've made some strategic partnerships with top players in the industry. We've partnered with Yotta and CtrlS, among the largest Tier-4 data centers in Asia.

And for managed cloud services, we've partnered with AWS, DigitalOcean and Vultr. Indeed, together as partners we can make a significant impact.

And that's it, I think!

Kindly elaborate on any upcoming projects/plans that benefit your customers.

Things are always busy around here. We're constantly investing in significant new features and capabilities, advancements in infrastructure, best-in-class technology, security tools and everything that's beneficial for our clients.

How do you see the company and the industry developing in the future?

Of course, when it comes to us, we're looking at an exciting future. To start with, India had more than 7.9 million micro, small and medium enterprises (MSMEs) as of 27 March 2022, and we're looking to help these industry segments go online with our reliable web hosting service.

No doubt, MilesWeb is thriving, and at the same pace we want to fill the gap, capture the market and reach great heights of success.

In addition, the Indian web hosting market is growing very rapidly in terms of infrastructure. A recent report says that over 45 data centers covering 13 million square feet and offering 1,015 MW of IT capacity are planned in India by 2025. This would be a huge step considering the solid demand for infrastructure.

I believe that the future ahead is definitely aligned towards massive growth.

Biggest achievements of your company.

I think the biggest achievement of our company so far is the dramatic improvement in our customer satisfaction rate. Since the start, our focus on customer experience has always been a top priority.

What users expect from a web hosting provider is top-of-the-line hosting service, prices they can actually pay, security and quality customer support. None of this works in isolation; it is the ultimate combination for success.

We thrive on delighting our customers with all this and a lot more. We have tried to keep our customers happy at each step of their online journey.

Mention some of the awards, achievements, recognitions, and clients feedback that you feel are notable and valuable for the company.

Well, to tell you frankly, we never do business expecting rewards in return. But, yes, whatever we do, we raise the bar of excellence!

And as a corollary to that, we're much appreciated for unbeatable customer support, reliable hosting service and being users' top choice.

Likewise, to date we've won Users' Top Choice from HostAdvice, the Enabler of the Year Award from the Nayabharat Jagran Group for helping SMBs mark their web presence, Great Support & Excellent Service from HostAdvice, Most Viewed Hosting Provider from Hosting Seekers, and the badges go on.

Adding to that, we get stellar feedback from our clients each day for addressing their technical concerns and assisting them with a smooth online experience. We've received more than 10,000 positive reviews from our valued customers on different trusted review platforms.

I must say, this feedback especially motivates our support associates and remains the most valuable testament to our company.

This is really a good question. It's obvious that progress is not just a one-day game; it's all about the consistent effort and strategic methods that take your organization from where it started to where it is now.

In exactly the same way, we started our company ten years back, progressing with each move by putting in a lot of hard work and focusing on customer satisfaction. Since the start, we have followed a purely customer-centric approach, and we are ready to think outside the box to keep customers satisfied.

This ethos is why we stand tall with a little over 40,000 happy clients on a global scale, and the number grows daily. In short, this approach has helped galvanize the progress of our company.

Advice to the future leaders who want to make it big in the market.

I'd say the web hosting world is strongly competitive, and you need to go the extra mile to make it big. Here's my advice to diligent future leaders: a leader par excellence does not follow a single path.

For this, they need to have the guts to take risks beyond their comfort zone and, of course, should strike the perfect balance between seasoned expertise and the will to thrive.

Most importantly, a leader needs the ability to handle market challenges, ups and downs, and ultimately to understand customers' requirements and keep them satisfied.

Well, these are the great mantras, I believe.

Thank you. It's my pleasure and a great moment to be part of this interview!


Cerebras Wants Its Piece Of An Increasingly Heterogenous HPC World – The Next Platform

Changing the compute paradigm in the datacenter, or even extending or augmenting it in some fashion, is no easy task. A company, even one that has raised $720 million in seven rounds of funding over the past six years, has to be careful not to try to do too much too fast and lose focus, while at the same time adapting to conditions in the field to get its machines doing real work and proving their worth on tough tasks.

This is where machine learning upstart and wafer-scale computing pioneer Cerebras Systems finds itself today, and it does not have the benefit of the ubiquity that the Intel X86 architecture had, or the relative ubiquity that the Nvidia GPU architecture had, as they challenged the incumbents in datacenter compute in the 1990s and the 2010s, respectively.

If you wanted to write software to do distributed computing on those architectures, you could start with a laptop and then scale the code across progressively larger clusters of machines. But the AI engines created by Cerebras and its remaining rivals, SambaNova Systems and Graphcore, and possibly Intel's Habana line, are large and expensive machines. Luckily, we live in a world that has become accustomed to cloud computing, and now it is perfectly acceptable to do timesharing on such machines to test ideas out.

This is precisely what Cerebras is doing as it stands up a 13.5 million core AI supercomputer nicknamed Andromeda in a colocation facility run by Colovore in Santa Clara, the very heart of Silicon Valley.

"This machine, which would cost just under $30 million if you had to buy it, is being rented by dozens of customers who are paying to use it to train on a per-model basis with cloud-like pricing," Andrew Feldman, one of the company's co-founders and its chief executive officer, tells The Next Platform. There are a bunch of academics who have access to the cluster as well. The capacity on Andromeda is not as cheap and easy as running CUDA on an Nvidia GPU embedded in a laptop in 2008, but it is about as close as you can get with a wafer-scale processor that would not fit inside a normal server chassis, much less a laptop.

This is similar to the approach that rival SambaNova Systems has taken, but as we explained when talking to the company's founders back in July, a lot of customers are going even further and are tapping SambaNova's expertise in training foundation models for specific use cases as well as renting capacity on its machines to do their training runs.

This approach, which we think all of the remaining important AI training hardware vendors will need to take (that would be Cerebras, SambaNova, Graphcore and, if you want to be generous, Intel's Habana Labs division, assuming Intel doesn't shut it down as part of its looming cost cuts), is not so much a cloud or hosting consumption model as it is the approach IBM took in the 1960s, at the dawn of the mainframe era, with its System/360s. Back then, you bought a machine and you got white-glove service and programming included with the very expensive services, because so few people understood how to program applications and create the databases underneath them.

Andromeda is, we think, a first step in this direction for Cerebras, whose customers are very large enterprises and HPC centers that already have plenty of AI expertise. But the next and larger round of customers (the ones that will constitute a real revenue stream, and possibly profits, for Cerebras and its AI hardware peers) are going to want access not just to flops, but to deep expertise, so models can be created and trained for very specific workloads as quickly as possible.

Here are the basic feeds and speeds of the Andromeda system:

Each of the CS-2 nodes in the Andromeda cluster has four 64-core AMD Epyc processors that do housekeeping tasks for each of the WSE-2 wafers, which have 2.6 trillion transistors implementing 850,000 cores and their associated 40 GB of SRAM. That embedded SRAM memory on the die has 20 PB/sec of aggregate bandwidth, and the fabric implemented between the cores on the wafer has an aggregate bandwidth of 220 Pb/sec. Cerebras calls this mesh fabric linking the cores SwarmX, and a year ago this interconnect was extended over a dozen 100 Gb/sec Ethernet transports to allow the linking of up to 192 CS-2 systems into a single system image. Across the 16 CS-2 machines in Andromeda, the interconnect fabric has 96.8 Tb/sec of aggregate bandwidth.

Just as you can plug FPGAs together with high-speed interconnects and run a circuit simulation as a single logical unit (because of the high-speed SerDes that wrap around each FPGA's pool of configurable logic gates), the Cerebras architecture uses the extended SwarmX interconnect to link the AI engines together so they can train very large models across up to 163 million cores. Feldman says that Cerebras has yet to build such a system and that this scale has been validated thus far only in its simulators.
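As a quick check, those headline core counts follow directly from multiplying the per-wafer core count by the number of CS-2 systems. A minimal sketch, using only the figures quoted in the article:

```python
# Back-of-the-envelope check of the core counts quoted in this article.
CORES_PER_WSE2 = 850_000          # cores implemented on one WSE-2 wafer

andromeda_nodes = 16              # CS-2 systems in Andromeda
max_supported_nodes = 192         # maximum scale, validated only in simulation

print(f"{andromeda_nodes * CORES_PER_WSE2:,} cores")       # 13,600,000 (the ~13.5 million headline)
print(f"{max_supported_nodes * CORES_PER_WSE2:,} cores")   # 163,200,000 (the ~163 million ceiling)
```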

That SwarmX fabric has also been extended out to what is essentially a memory area network, called MemoryX, that stores model parameter weights and broadcasts them to one or more CS-2 systems. The SwarmX fabric also reduces gradients from the CS-2 machines as they do their training runs. So the raw data from training sets and the model weights that drive the training are disaggregated. In prior GPU architectures, the training data and model weights have lived in GPU memory, but with fast interconnects between CPUs and GPUs and the fatter memory of the CPU, data is being pushed out to the host nodes. Cerebras is just aggregating parameter weights in a special network-attached memory server. The SwarmX fabric has enough bandwidth and low enough latency to quickly stream weights into each CS-2 machine, mainly because it is actually not running the Ethernet protocol but a very low latency proprietary protocol.

By contrast, the 1.69 exaflops Frontier supercomputer at Oak Ridge National Laboratory has 8.73 million CPU cores and GPU streaming multiprocessors (the GPU equivalent of a core), and 8.14 million of those are the GPU SMs that comprise 97.7 percent of the floating point capacity. At FP16 precision, the top of the Cerebras precision range, Frontier would weigh in at 6.76 exaflops across those GPU cores. AMD does not yet have sparse matrix support for its GPUs, but we strongly suspect that will double the performance, as is the case with Nvidia Ampere A100 and Hopper H100 GPU accelerators, when the Instinct MI300 GPU accelerators (which we will start codenaming "Provolone," as the companion to Genoa CPUs, if AMD doesn't give us a nickname soon) ship next year.

In any event, as Frontier is configured with its Instinct MI250X GPUs, you get 6.76 exaflops of aggregate peak FP16 for $600 million, which works out to $88.75 per teraflops for either dense or sparse matrices. (We are not including the electric bill for power and cooling or the cost of storage, just the core system.)

That's a lot of flops compared to the 16-node Andromeda machine, which only drives 120 petaflops at FP16 precision with dense matrices but, importantly, delivers close to 1 exaflops with the kind of sparse matrix data that is common in the foundational large language models driving AI today.

Hold on. Why is Cerebras getting an 8X boost for its sparsity support when Nvidia is only getting a 2X boost? We don't know yet, but we just noticed that and are trying to find out.

The WSE-2 compute engine only supports quarter-precision FP16 and half-precision FP32 math, plus a proprietary floating point format called CB16 that has six-bit exponents; regular IEEE FP16 has five-bit exponents, and the BF16 format from Google's Brain division has eight-bit exponents, which makes it easier to convert to FP32 formats. So there is no extra boost coming from further reduced precision down to, say, FP8, FP4 or FP2. As far as we know.

At $30 million, the 16-node Andromeda cluster costs $250 flat per teraflops for dense matrices, but only $31.25 per teraflops with sparse matrices. It only burns 500 kilowatts, compared to the 21.9 megawatts of Frontier, too.
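Those price-performance figures are easy to reproduce. Here is a minimal sketch of the arithmetic, assuming the article's cost and throughput numbers and Cerebras' claimed 8X sparsity speedup:

```python
# Dollars per teraflops, from the figures quoted above.
FRONTIER_COST = 600_000_000
FRONTIER_TFLOPS = 6_760_000            # 6.76 exaflops FP16, dense or sparse

ANDROMEDA_COST = 30_000_000
ANDROMEDA_DENSE_TFLOPS = 120_000       # 120 petaflops FP16, dense matrices
ANDROMEDA_SPARSE_TFLOPS = 120_000 * 8  # ~1 exaflops with the claimed 8X sparsity boost

print(f"Frontier:           ${FRONTIER_COST / FRONTIER_TFLOPS:.2f}/TF")          # ~$88.75
print(f"Andromeda (dense):  ${ANDROMEDA_COST / ANDROMEDA_DENSE_TFLOPS:.2f}/TF")  # $250.00
print(f"Andromeda (sparse): ${ANDROMEDA_COST / ANDROMEDA_SPARSE_TFLOPS:.2f}/TF") # $31.25
```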

But here is the real cost savings: a whole lot less grief. Because GPUs are relatively small devices (at least compared with an entire wafer bearing 850,000 cores), running large machine learning models on them means chopping up datasets and using a mix of model parallelism (running different layers of the model on different GPUs that have to communicate over the interconnect) and data parallelism (running different portions of the training set on each device and doing all of the work of the model on each device individually). Because the WSE-2 chip has so many cores and so much memory, the training set can fit in the SRAM, and Cerebras only has to do data parallelism, calculating one set of gradients on that dataset rather than having to average them across tens of thousands of GPUs. This makes it much easier to train an AI model; because of the SwarmX interconnect, the model can scale nearly linearly with training data and parameter count, and because the weights are propagated using the dedicated MemoryX memory server, getting weights to all of the machines is also not a problem.

"Today, we can support 9 trillion parameter models on one CS-2," says Feldman. "It takes a long time, but the compiler can work through them and it can place work and we can store it using MemoryX and SwarmX. We don't do model parallelism because our wafer is so big that we don't have to. We extract all of the parallelism by being strictly data parallel, and that is the beauty of this."

To be honest, one of us (Tim) did not fully appreciate the initial architecture choice Cerebras made and the changes announced to it last year, while the other of us (Nicole) did. That's why we are each other's co-processor…

To be very clear, Cerebras is not doing model parallelism across those 16 CS-2 nodes in any fashion. You chop the dataset into the same number of pieces as you have nodes. SwarmX and MemoryX work together to accumulate the weights of the model for the 16 nodes, each with its piece of the training data, but the whole model runs entirely on its subset of data within one machine, and then the SwarmX network averages the gradients and stores the final weights on the MemoryX device. Like this:
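The original article illustrates this with a diagram. As a rough stand-in, here is a minimal Python sketch of strictly data-parallel training with a central weight store; the broadcast and reduce steps stand in for MemoryX and SwarmX, and all of the names here are hypothetical illustrations, not Cerebras APIs:

```python
import numpy as np

NUM_NODES = 16  # one shard of the dataset per CS-2 node

def compute_gradient(weights: np.ndarray, shard: np.ndarray) -> np.ndarray:
    # Stand-in for a full forward/backward pass on one node's data shard.
    # Here: gradient of a trivial least-squares loss ||shard @ w||^2 / n.
    return 2.0 * shard.T @ (shard @ weights) / len(shard)

rng = np.random.default_rng(0)
weights = rng.normal(size=8)                                    # the single weight copy ("MemoryX")
shards = [rng.normal(size=(64, 8)) for _ in range(NUM_NODES)]   # dataset chopped into 16 pieces

for step in range(100):
    # "MemoryX" broadcasts the same weights to every node; each node runs the
    # whole model on its own shard only -- no model parallelism anywhere.
    grads = [compute_gradient(weights, shard) for shard in shards]
    # "SwarmX" reduces (averages) the gradients on the way back.
    avg_grad = np.mean(grads, axis=0)
    weights -= 0.01 * avg_grad  # central weights updated once per step
```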

The scaling that the Andromeda machine is seeing (it was very hard not to say "Strain" there) is damned near linear across a wide variety of GPT models from OpenAI.

With each increase in scale, the time it takes to train a model is proportionately reduced, and this is important because training times on models with tens of billions of parameters are still on the order of days to months. If you can cut that by a factor of 16X, it might be worth it, particularly if you have a business that requires frequent retraining.

Here is the thing. Sequence lengths (a gauge of the resolution of the data) keep getting longer and longer to provide more context for the machine learning training. AI inference might have a sequence length of 128, 256 or 384, but rarely 1,024; training sequence lengths can be much higher. In the table above, the 1.3 billion GPT-3 and 25 billion GPT-J runs had 10,240 sequence lengths, and the current CS-2 architecture can support close to 50,000 sequence lengths. When Argonne National Laboratory pitted a cluster of a dozen CS-2s against the 2,000-GPU Polaris cluster, which is based on Nvidia Ampere A100 GPUs and AMD Epyc 7003 CPUs, Polaris could not even run the 2.5 billion and 25 billion GPT-3 models at the 10,240 sequence level. And on some tests, where a 600-GPU partition of Polaris was pitted against the dozen CS-2 machines, it took more than a week for the Polaris system to converge when using a large language model to predict the behavior of the coronavirus genome, but the Cerebras cluster's AI training converged in less than a day, according to Feldman.

The grief of using Andromeda is also lower in another way: It costs less than using GPUs in the cloud.

Just because Andromeda costs around $30 million to buy doesn't mean that a timeslice of the machine is priced proportionally to its cost, any more than the price that Amazon Web Services pays for a server directly reflects the cost of an EC2 instance sold from the cloud. GPU capacity costs are all over the map on the clouds, on the order of $4 to $6 an hour per GPU on the big clouds, and for an equivalent amount of training on GPT-3 models, Feldman says that the Andromeda setup could cost half that of GPUs, and sometimes a lot less, depending on the situation.

At least for now, Cerebras is seeing a lot of action as an AI accelerator for established HPC systems, often machines accelerated by GPUs and doing a mix of simulation and modeling as well as AI training and inference. And Feldman thinks it is absolutely normal that organizations of all kinds and sizes will use a mix of machinery (a workflow of machinery, in fact) instead of trying to do everything on one architecture.

"It is interesting to me that this sounds like a big idea," says Feldman with a laugh. "We build a bunch of different cars to do different jobs. You have a minivan to go to Grandma's house and soccer practice, but it is terrible for carrying 2x4s and 50-pound bags of concrete, and a truck isn't. And you want a different machine to have fun, or to haul logs, or whatever. But the idea that we can have one machine and drive its utilization up to near 100 percent is out the window. And what we will have are computational pipelines, a kind of factory view of big compute."

And that also means, by the way, that keeping a collection of machines busy all the time and getting the maximum value out of the investment is probably also out the window. "We will be lucky," says Feldman, "if this collection of machinery gets anywhere between 30 percent and 40 percent utilization." But this will be the only way to get all kinds of work done in a timely fashion.


Melita Business showcasing Cloud and Fibre connectivity at SiGMA – Times of Malta

Melita Business will be showcasing its Fibre connectivity and Cloud services, along with colocation and end-to-end network infrastructure management at the SiGMA Summit, which is the leading forum for the iGaming Industry.

Melita Business experts will be on hand to talk about disaster recovery, backup and cloud hosting solutions, all complemented by high-speed fibre connectivity with fully redundant international links connecting to the world's leading internet carriers in Milan.

The iGaming industry accounts for around 10 per cent of Malta's GDP, making it an important part of the economy.

Malcolm Briffa, Director of Business Services at Melita, explained: "Alongside our hosting and cloud services, the Melita Business team will also be sharing best practices, solutions and professional advice with iGaming companies that require fast and reliable connectivity through dedicated fibre internet or international private links, available for companies located across Malta and Gozo."

Malta has long established itself as a leading remote gaming jurisdiction with an efficient licensing process and a swift regulatory system. Thanks to its responsiveness to the iGaming industry, the country now boasts the largest number of licensed operators in the world. The Melita Business team will be displaying the company's future-proof technology and hard-earned expertise on Stand C01. Dedicated consultation sessions can be reserved at sales@melitabusiness.com.



Built to Linux Cloud VPS Server Hosting With SSD and Control Panel Via Onlive Server – Digital Journal

Onlive Server offers a Cloud VPS Server that can give you the perfect balance of power and flexibility, and USA Server Hosting has some of the best options in the business. In this blog post, we'll look at some of the reasons this server could be the right choice for your website, along with some of the top features our servers offer.

A VPS, or Virtual Private Server, is a server that runs in a cloud data center and can be used just like a physical server would be. The benefits of using virtual servers include flexibility, scalability, cost-efficiency, and reliability. A Cloud USA VPS Server is less expensive than a physical one because you only need to pay for the resources that you use.

Scalability: A USA VPS can be quickly scaled up or down as needed, so you only pay for the resources you use.

Flexibility: With this kind of server, you're not tied to any one physical location; you can easily move your server to another data center if needed.

Reliability: These servers are designed to be highly available and can tolerate failures of individual components without affecting your website.

Benefits of Using a Cloud VPS?

Using cloud USA VPS Hosting to boost your website performance has many benefits. Perhaps the most obvious benefit is that you can scale your website without worrying about capacity issues. It also offers great flexibility since you can easily add or remove resources as your website demands change. Finally, it can be a great cost-saving measure since you only pay for the resources you use.

More Scalable

The cloud is more scalable than traditional VPS hosting. With a traditional USA VPS server, you are limited by the size of the server you are using. With the best and most affordable cloud server hosting, you can easily scale up or down as needed.

Provides Top-Class Security

USA VPS provides top-class security to its customers with the help of the latest technologies and a team of highly skilled security professionals.

Media Contact: Onlive Server Private Limited, +91 6387659722, [email protected]

Follow the full story here: https://przen.com/pr/33484979


Week in Insights: More Is Merrier at Thanksgiving – Bloomberg Tax

In a few days, my house is going to be loud. I am totally looking forward to it.

It won't start as a roar but more of a gradual turning up of the volume. My daughters will arrive home from college at the beginning of the week. Shortly after, my brother and his family will drive in from Connecticut, while my parents will take a little longer to arrive from North Carolina.

By the time Thanksgiving arrives, I'll be hosting 26 people for dinner. Yes, 26.


It's not all family. We also welcome international students to our home over the holidays. My law school alma mater (shout out to Temple U!) coordinates a Thanksgiving dinner program every year. Recognizing that it's difficult for international students to travel home over the break (and that the dorms will likely be empty), they match as many students with local host families as they have spots available.

For the past several years, we've hosted two or three students. Sometimes, they come back in subsequent years; we made great friends with a family who returned and brought their young son. But this year, we're hosting seven. You're allowed to cap the number of students you can comfortably host, and we didn't submit a limit.

When I saw the list, I was a little overwhelmed. But then I realized that I couldn't say no. We have the space. We have the resources to provide dinner. We have a great family who loves to meet new people. And I thought about my girls in college and what it would be like if they were alone during the holidays. So, I called our local party rental company, ordered an additional table and some chairs, and started reworking my menu. I don't regret it for a minute.

Over the years, some of our best conversations have happened across those dinner tables. I love hearing about what it's like to grow up in other cultures and what other countries may think about the US and its various government systems; of course, I ask about tax whenever I can. I never walk away without learning something new.

I think we always learn the most from each other, whether that happens over a dinner table, in an office, or on the internet. I hope you'll do that in your own life, too, whether it's taking the time to talk with a colleague, penning an article for us, connecting on social media, or even inviting students into your home. (Check to see if there's a program in your area.)

At Bloomberg Tax, we aim to make it easy for you to share and receive information. Our experts offer great commentary and insightful analysis on federal, state, and international tax issues to keep you in the know, and that's definitely something to be thankful for!

The Exchange: It's where great ideas intersect.

Kelly Phillips Erb

True or false: If your employer gives you a Thanksgiving turkey or a gift card to buy a turkey, it's excludable from taxable income. (Answer at the bottom.)

How much should your firm or practice get involved in political issues, if at all? Should your business contribute to an elected official, political candidate, or a political cause or take a political candidate or official as a client? If you do, what questions could you face from your stakeholders?

Find the answers to those questions and more by joining the Bloomberg Tax and Bloomberg Law Insights & Commentary teams on Wednesday, Nov. 30, from noon to 1 p.m. ET for "Should Your Company Take a Stand on Political and Social Issues?", part of our free virtual Lunch & Learn series. Two attorneys from Skadden, Arps, Slate, Meagher & Flom LLP will lead a discussion about political and social issues in the workplace.

You can join us for this event, no registration required, by signing on here at noon ET on Nov. 30 or by calling +1-929-205-6099 US and entering the meeting ID: 975 6437 0979.


When the pricing and financial performance of a sports franchise are misaligned, external financing may not be advantageous or practical. But understanding the tax impact of the terms of a deal is simply adhering to the strategy that the best offense is a good defense, say RSM US LLP's Amanda Hodgson, Jamie Sanders, and Justin Krieger.

President Joe Biden has nominated Daniel Werfel as the IRS' next commissioner. With the right leadership and oversight, the agency can deliver the 21st century service that all taxpayers have the right to expect while taking a big bite out of the $600 billion-plus annual tax gap, say former IRS commissioners Fred Goldberg and Charles O. Rossotti.

Cloud-based software-as-a-service business models are enabling rapid growth, and the accounting industry needs to adapt. Stout's Steve Sahara, Jeremy Krasner, Brad Burch, Kevin Pierce, and Joe Randolph share some important aspects of SaaS revenue recognition.

The IRS has been using John Doe summonses in its digital asset enforcement since 2016, when it first attempted to identify crypto exchange customers. Taxpayers usually lost in court objecting to the summonses, but Ropes & Gray attorneys discuss a novel strategy in a federal appeals court.

The IRS' new compliance program gives any retirement plan sponsor targeted for examination a 90-day review period to determine if they satisfy all tax law requirements. Best Best & Krieger LLP's Susan Neethling and Helen Byrens detail the pilot program and share some next steps for plan sponsors.

Incentive stock options provide executives with various tax benefits, but how do you know when to exercise and sell the underlying ISO stock? Alyssa Rausch of EisnerAmper summarizes the basic tax rules and common tax strategies.

Foreign-incorporated cruise lines based in US ports have often been exempted from US income tax due to Section 883. In a two-part article, Gunster, Yoakley & Stewart, P.A.'s Alan S. Lederman discusses whether US corporate alternative minimum tax could apply to these cruise lines or whether the proposed OECD Pillar Two minimum tax will impose a minimum tax regime on such cruise lines.

Christos Theophilou of Taxand explains that multinational enterprises need to have adequate preparation in place to satisfy the scrutiny of intra-group services by tax authorities, and he provides a practical example and case study to illustrate the issues.

As discussion of how to tackle the global challenge of climate change continues at COP27, Chris Morgan of KPMG considers current national approaches, including tax measures, and suggests more flexibility for countries to decide what approach is most suited to their individual needs.

Rob Janering of Crowe looks at the value-added tax compliance considerations for UK organizations supplying digital events and discusses the impact of the upcoming changes in the EUs position on supplies of live online services.

A profit-per-employee tax could go a long way to support the American workforce and to ensure that Big Tech factors the human cost of business into their plans as much as profits, says writer Hassan Tyler.

At The Exchange, we welcome responses from our readers and encourage diversity and civil discussion. We are especially interested in responses that add to the conversation or introduce a different point of view. If you have a response to one of our published Insights, we'd love to hear from you.

Nonfungible tokens hold intrinsic value due to their digital properties and traits. In this edition of A Closer Look, Stouts Fotis Konstantinidis looks at the challenges of valuing NFTs, as well as the methods and data used for valuation.


What's on the Bloomberg Tax Insights wish list right now? For December, we're hoping to end the year on a high note with a wrap-up of the year and a peek ahead. We're interested in: What did you think was newsworthy in tax in 2022? What should we look out for in 2023? What should tax professionals do now to prepare for next tax season? We're looking for a thoughtful take that will get tax professionals talking about next year... even before the calendar flips over.

Our Insights articles (about 1,000 words) are written by tax professionals offering expert analysis on current tax practice and policy issues, tax trends and topics, and tax and accounting firm practice and management. If you have an interesting, never-published article for publication, we'd love to hear about it. You can contact our Insights team at TaxInsights@bloombergindustry.com.

Private equity firm Cinven's recent $720 million acquisition of tax preparation service TaxAct is the latest example of the changing landscape around mergers and acquisitions. But filing your taxes should be free and should not involve entities motivated by profit, full stop, says columnist Andrew Leahey.

Despite numerous controversies, the 2022 World Cup in Qatar is expected to generate billions of dollars. I take a look at where the money comes from, where it goes, and how it might be taxed.

As boiler room schemes gather steam across the globe, it's important to remain alert. In my column, I explain some tips to consider during International Fraud Awareness Week.

It's been a busy week in tax news from state capitals to Washington. Here are some stories you might have missed from our Bloomberg Tax news team. (Note: Your Bloomberg Tax login will be required to access Tax News.)

Carlos Martinez has joined White & Case as a partner in the tax practice in Mexico City, the firm said.

Martín Guzmán has been appointed to the role of commissioner by the Independent Commission for the Reform of International Corporate Taxation, according to a news release.

RKL LLP has added Thomas Romano and Jennifer Witmer as managers in the tax services group in Pennsylvania, the advisory firm said.

Leila Vaughan has rejoined Faegre Drinker as a partner in the investment management practice group in Philadelphia, the firm said.

If you are changing jobs or being promoted, let us know. You can email your submission to TaxMoves@bloombergindustry.com for consideration.

Our Spotlight series highlights the careers and lives of tax professionals across the globe. This week's Spotlight is on Caroline Cao, a partner at Lewis Brisbois Bisgaard & Smith, LLP in Sacramento, Calif.


False. If your employer gives you a turkey, ham, or other item of nominal value at Thanksgiving, Christmas, or other holidays, it is excludable from income. However, if your employer gives you cash, a gift card, or a similar item that you can easily exchange for cash (even if you promise to buy a turkey), that's taxable regardless of the amount.

We talk about tax a lot. But there's a lot more that you might hear us talking about if you popped into one of our Teams meetings. Here's a quick look at what some of us are watching, reading, and listening to this week:

Watching:

Reading:

Listening:

Sign up for your free copy of our newsletter, delivered to your inbox each week. Just head over to The Exchange and sign up using the green Free Newsletter Signup box at the top, or go directly to the newsletter sign-up page.

Your feedback and suggestions are important to us, so don't hesitate to reach out on social or email me directly at kerb@bloombergindustry.com.


4 tech investment trends to watch out for in 2023 – GrowthBusiness.co.uk

From a pricing and valuation perspective, it's been a difficult year for tech stocks. Even blue-chip names such as Microsoft, Apple and NVIDIA have faced material price corrections on the financial markets. At one point in mid-October, Morningstar's US Technology Sector index was down more than a third on the beginning of the year. Even after a slight rally, it remained a quarter lower in early November.

However, technology businesses continue to demonstrate good levels of growth as the world becomes increasingly digitised. Additionally, many tech businesses have resilient characteristics meaning that they remain attractive to investors.

>See also: Five venture capital trends for start-ups to follow in 2022

There are good reasons to be optimistic about the future of tech. For a start, there have been several years of strong growth, and not just among the giants. Last year was the best ever for the UK tech industry in terms of investment, with the sector securing £29.4bn in funding. As the Department for Digital, Culture, Media & Sport summed up: "More VC investment, more unicorns, more jobs and more futurecorns."

Perhaps more importantly, as businesses face up to a challenging environment, they'll rely on technology more than ever to secure the efficiencies and opportunities necessary to survive, and even thrive, in a tougher climate. This should ensure that certain niches of the technology ecosystem will continue to see growth.

>See also: Why it's a good time to invest in UK start-ups if you're a dollar investor

Four tech investment trends in particular stand out.

In an inflationary market, many businesses will be looking to drive efficiencies. This will lead to continued and growing demand for AI, machine learning and the automation which comes with it. Reducing reliance on human resources and boosting efficiency will always be popular as labour and other costs rise, but what we are seeing now is applications going well beyond simply streamlining processes.

For example, Mobysoft, an ECI investment, uses predictive analytics to help social housing providers keep their tenants in well-maintained homes, while improving rent collection and reducing arrears.

AI can also help drive new business. CPOMS, a former ECI investment, provides a good example: it used Alteryx analytics software to create a new-business identification model, prioritising prospective customers by analysing the most common features of its existing user base. Such tools will be valuable for businesses looking to source new revenue.

The cloud also offers a significant opportunity for businesses to both cut costs and develop new capabilities. Public cloud hosting allows businesses to rapidly scale their operations up or down without incurring significant capital expenditure, which can prove useful either for investing additional free cash in growth initiatives or for taking defensive action in tougher scenarios. The old financial adage that cash is king remains as true today as it ever did.

Moreover, the range of cloud services and tooling offered by the hyperscalers is growing. For example, Microsoft has launched an IoT Hub in its Azure platform, which enables companies to construct customised solutions for complex scenarios to facilitate IoT use cases. This is likely to become even more useful as the range of potential applications of IoT expands with the rollout of the 5G spectrum and the increasing prevalence of low-power IoT networks.

Crucially, public cloud platforms offer businesses the ability to keep tighter control of their fixed costs, in terms of the technology, the internal IT capability and the floor space that historically may have been used to house on-premises infrastructure. This capability point will be particularly valuable at a time when such skills are expensive and in short supply.

"Learn to code" was once the default advice for employees hit by redundancy or those who found themselves in a declining industry. More recently, coding has become a significantly in-demand skill as SaaS and software have become increasingly prevalent. However, while historically it was imperative for a developer to learn one, if not several, distinct coding languages, the development of low-code platforms should democratise software creation, making it simpler for non-technical individuals to create products and applications without having to learn a language.

This trend will allow a greater range of businesses to produce software and accelerate products to market by simplifying development. It may also help address shortages in the number of developers given the current war for talent in this space.

Finally, cybersecurity is certain to remain a priority for businesses and individuals, regardless of how the economy performs, particularly given the increasing number of malicious individual or state actors. Notably, there have been massive global increases in the use of ransomware to extract capital from afflicted businesses. There are even guides on how to launch ransomware attacks on the dark web, so this is an increasingly important and sadly frequent issue that businesses have to face.

High-profile attacks tend to make headlines, the most recent being a severe attack on Uber in September 2022, which was started by a hacker who manipulated an employee into sharing their password through a remote access portal on their mobile phone. Via this one small error, the hacker was able to gain access to the company's critical infrastructure. However, it is not just high-profile corporates that are at risk. The threat is ubiquitous, with attackers targeting businesses of all sizes, and on occasion for relatively small sums of money.

In the face of an increasing frequency of cyber-attacks, it is imperative that businesses protect their digital assets, IP and customers' data. This creates a beneficial backdrop for cybersecurity businesses to grow and create value for shareholders.

Businesses should continue to invest in these areas to ensure they are best placed for the future. Technologies and tech services providers in these areas are likely to thrive, with strong prospects for growth and valuations.

Daniel Bailey is investment director at ECI Partners

Who are the UK's next unicorns?


Interior Department Seeks Proposals for $1B Cloud Hosting Solutions III Contract – Executive Gov

The Department of the Interior has initiated bid-seeking for a potential 11-year, $1 billion single-award indefinite-delivery/indefinite-quantity contract covering data consolidation and cloud migration services.

A notice posted Monday on SAM.gov states that the Cloud Hosting Solutions III contract will help DOI transition to a single virtual private cloud that will support requirements for cloud and managed services.

The CHS III contract will support the implementation and maintenance of a virtual private cloud, enforce policies within the VPC environment and provide various managed services optimized to operate in the single hybrid cloud environment.

The selected enterprise cloud services broker will manage a portfolio of cloud computing, storage and application services across multiple vendor offerings, the final request for proposals document states.

The IDIQ has a five-year base period of performance and three two-year option periods.

Responses are due Dec. 19.


Healthcare Cloud Computing Market to Reach USD 157.75 Billion by 2030; Widespread Use of Wearable Technology, Big Data Analytics & IoT in The…

The Brainy Insights

Top companies like Iron Mountain Inc., Athenahealth Inc., Dell Inc. & IBM Corp. have introduced software and services that allow for the collection and assimilation of enormous amounts of healthcare data that are beneficial for the development of the market. North America emerged as the largest market for the global healthcare cloud computing market, with a 34.02% share of the market revenue in 2022.

Newark, Nov. 11, 2022 (GLOBE NEWSWIRE) -- Healthcare cloud computing market size to grow from USD 38.55 billion to USD 157.75 billion in 8 years: The Evolution from Niche Topic to High ROI Application

Upcoming Opportunities

The Brainy Insights estimates that the healthcare cloud computing market, valued at USD 38.55 billion in 2022, will reach USD 157.75 billion by 2030. In just eight years, healthcare cloud computing has moved from an uncertain, standalone niche use case to a fast-growing, high return on investment (ROI) application that delivers user value. These developments indicate the power of the Internet of Things (IoT) and artificial intelligence (AI), and the market is still in its infancy.

Key Insight of the Healthcare Cloud Computing Market

Europe to account for the fastest CAGR of 22.74% during the forecast period

Europe is expected to have the fastest CAGR of 22.74% in the healthcare cloud computing market. A key factor favouring this growth is that more people in Europe are becoming aware of the accessibility of better cloud computing solutions for the healthcare industry. Additionally, the number of hospital admissions is rising due to the growing older population, who are more susceptible to several ailments. The combined effect of all these factors is considered favourable for demand in the European healthcare cloud computing market.

However, in 2022, the healthcare cloud computing market was dominated by North America. Due to the widespread use of healthcare IT services and ongoing financial and legislative backing from government organisations, the US is a global leader in the healthcare cloud computing business. The implementation of the Health Information Technology for Economic and Clinical Health (HITECH) Act sped up the deployment of EHRs and related technologies nationwide. The terms of the Act provide that, up to a specific time, healthcare providers will be given financial incentives for showing meaningful use of EHRs; beyond that point, fines may be assessed for failing to demonstrate such usage. Furthermore, in May 2020, Microsoft revealed a cloud service designed exclusively for the healthcare industry to serve doctors and patients better. Healthcare organisations may use this industry-specific cloud service to organise telehealth appointments, manage patient data, and comply with the Health Insurance Portability and Accountability Act (HIPAA).


Get more insights from the 230-page market research report @ https://www.thebrainyinsights.com/enquiry/sample-request/13004

The private segment accounted for the largest market share of 39.81% in 2022

In 2022, the private segment held a significant 39.81% market share, dominating the market. It is essential to securely preserve very sensitive patient data to avoid a data privacy breach that might result in legal ramifications. Many reasons have contributed to the growth of the private cloud market, including rising acceptability due to its improved security and an increasing adoption rate compared to public clouds and hybrid clouds.

Healthcare payers account for the fastest CAGR of 23.06% during the forecast period.

Over the forecasted period, the segment of healthcare payers is predicted to increase at the highest CAGR of 23.06%. Insurance companies, organisations that sponsor health plans (such as unions and employers), and third parties make up the healthcare payers. Payers are quickly using cloud computing solutions for safe data collecting and storage, resolving insurance claims, evaluating risks, and preventing fraud. Payers have traditionally struggled to manage high-risk patient groups and high usage. Payers are implementing these cutting-edge technical methods and remedies to reduce escalating healthcare costs. Additionally, cloud computing supports payers in corporate growth, service enhancement, quality improvement, and cost reduction.

Advancements in the market

To increase patient engagement, foster teamwork among healthcare professionals, and enhance clinical and operational insights, Microsoft introduced its Microsoft Cloud for Healthcare suite in November 2020.

The collaboration between CVS Health and Microsoft to advance digital healthcare using AI and cloud computing began in December 2021.

To support research and innovation, the life sciences software business, MetaCell, unveiled a new product in September 2021 called "MetaCell Cloud Hosting," which offers cutting-edge cloud computing solutions mainly created for life science and healthcare enterprises.

Market Dynamics

Driver: Advancements in the technology

Because of recent technological developments and enhanced security, many healthcare institutions use the cloud's advantages more than ever. Due to technological developments like telemedicine, remote monitoring, and natural language processing APIs, cloud technology will continue to grow in the upcoming years to better suit specific digital health settings in several important ways. Several healthcare organisations aspire to develop these cloud computing solutions by combining cutting-edge technologies. Instead of gathering data and sending it to the cloud, the system analyses and processes the data right where it is being collected. Due to the adoption of suitable regulatory measures and the development of high-speed internet, the global healthcare cloud computing market is also anticipated to grow. The increasing availability of high-speed internet worldwide is one of the significant factors fuelling the growth of the healthcare cloud computing market.

Restraint: Technological concerns related to the data

The global market for healthcare cloud computing is being restricted by issues about data privacy, obstacles to data transfer, and a rise in cloud data breaches. The lack of skilled IT workers has also restrained the adoption of this technology. Competent professionals are in great demand because of the challenge of finding expertise in HIPAA. The skill gap is expected to slow the shift to cloud computing platforms.

Opportunity: Increasing adoption of data analytics in the healthcare sector

The increasing adoption of wearable technologies, big data analytics, and the Internet of Things (IoT) in the healthcare sector, as well as the introduction of new payment methods and the affordability of the cloud, will boost the market growth. The market is also being affected by the increase in technology usage due to its many advantages, such as the flexibility, enhanced data storage, and scalability offered by cloud computing, as well as the accessibility of flexible medical benefit plan designs.

Challenge: Stringent regulations regarding data

Industry growth is anticipated to be hampered by the complex regulations governing cloud data centres, data security and privacy issues, among other factors. The market for healthcare cloud computing is anticipated to experience challenges throughout the forecast period due to provider rental rules, worries about interoperability and portability, and growing internet dependency among users.

Custom Requirements can be requested for this report @ https://www.thebrainyinsights.com/enquiry/request-customization/13004

Some of the major players operating in the healthcare cloud computing market are:

Allscripts Healthcare Solution Inc., Iron Mountain Inc., Athenahealth Inc., Dell Inc., IBM Corp., Oracle Corp., Cisco Systems Inc., Qualcomm Inc., VMware Inc., Microsoft Corp., EMC Corp. and GNAX Health

Key Segments cover in the market:

By Cloud Deployment:

Private, Public, Hybrid

By End User:

Healthcare Payers, Healthcare Providers

By Region

North America (U.S., Canada, Mexico); Europe (Germany, France, U.K., Italy, Spain, Rest of Europe); Asia-Pacific (China, Japan, India, Rest of APAC); South America (Brazil, Rest of South America); Middle East and Africa (UAE, South Africa, Rest of MEA)

Have a question? Speak to Research Analyst @ https://www.thebrainyinsights.com/enquiry/speak-to-analyst/13004

About the report:

The market is analyzed based on value (USD billion). All segments have been analyzed on a worldwide, regional, and country basis, and the study includes analysis of more than 30 countries for each segment. The report analyzes driving factors, opportunities, restraints, and challenges to provide critical insight into the market. The study includes Porter's five forces model, attractiveness analysis, product analysis, supply and demand analysis, competitor position grid analysis, and distribution and marketing channel analysis.

About The Brainy Insights:

The Brainy Insights is a market research company aimed at providing actionable insights through data analytics to help companies improve their business acumen. We have a robust forecasting and estimation model to meet clients' objectives of high-quality output within a short span of time. We provide both customized (client-specific) and syndicated reports. Our repository of syndicated reports is diverse across all categories and sub-categories across domains. Our customized solutions are tailored to meet clients' requirements, whether they are looking to expand or planning to launch a new product in the global market.

Contact Us

Avinash D, Head of Business Development. Phone: +1-315-215-1633. Email: sales@thebrainyinsights.com. Web: http://www.thebrainyinsights.com


Akamai invests in Macrometa as the two strike partnership – TechCrunch

Edge computing cloud and global data network Macrometa has struck a new partnership and product integrations with Akamai Technologies. Akamai also led a new funding round in Macrometa that included participation from Shasta Ventures and 60 Degree Capital. Akamai Labs CTO Andy Champagne will join Macrometa's board.

Macrometa founder and CEO Chetan Venkatesh told TechCrunch that its GDN enables cloud developers to run backend services closer to mobile phones, browsers, smart appliances, connected cars and users in edge regions, or points of presence (PoPs). That reduces outages because if one edge region goes down, another one can take over instantly. Akamai's edge network, meanwhile, covers 4,200 regions around the world.

The partnership between Macrometa and Akamai means the two are combining three infrastructure pieces into one platform for cloud developers: Akamai's edge network, cloud hosting service Linode (which Akamai bought earlier this year) and Macrometa's Global Data Network (GDN) and edge cloud. Akamai's EdgeWorkers tech is now available through Macrometa's GDN console, API and SDK, so developers can build a cloud app or API in Macrometa and then quickly deploy it to Akamai's edge locations.

Venkatesh gave some examples of how clients can use the integration between Macrometa and Akamai.

For SaaS customers, the integration means they can see speed increases and latency improvements of between 25x and 100x for their products, resulting in less user churn and better conversion rates for freemium models. Enterprise customers using the joint solution can improve the performance of streaming data pipelines and real-time data analytics. They can also deal with data residency and sovereignty issues by vaulting and tokenizing data in geo-fenced data vaults for compliance.

Video streaming clients, meanwhile, can use the integration to move their platforms to the edge, including authentication, content catalog rendering, personalization and content recommendations. Likewise, gaming companies can move servers closer to players and use the Akamai-Macrometa integration for features like player matching, leaderboards, multiplayer game lobbies and anti-cheating features. For e-commerce players competing against Amazon, the joint solution can be used to connect and stream data from local stores and fulfillment centers, enabling faster delivery times.

Macrometa will use the funding for developer education, community development, enterprise event marketing and joint customer sales with Akamai (Macrometa's products are now available through Akamai's sales team).

In a statement about the funding and partnership, Akamai EVP and CTO Robert Blumofe said: "Developers are fundamentally changing the way they build, deploy and run enterprise applications. Velocity and scale are more important than ever, while flexibility in where to place workloads is now paramount. By partnering with and investing in Macrometa, Akamai is helping to form and foster a single platform that meets the evolving needs of developers and the apps they're creating."

Edit: Inaccurate funding figure removed.

The rest is here:

Akamai invests in Macrometa as the two strike partnership - TechCrunch

Categories
Cloud Hosting

API series – Section: The why & how of distributing GraphQL – ComputerWeekly.com

This is a contributed piece for the Computer Weekly Developer Network written by Daniel Bartholomew, CTO at Section.

Section is known for hosting and delivering cloud-native workloads that are highly distributed and continuously optimised across a secure and reliable global infrastructure. Bartholomew is a regular speaker at industry events and an experienced technologist in agile and containerised development.

His current role is to envision the technology organisations need to simplify and automate global delivery of cloud-native workloads.

Bartholomew writes as follows:

Sources such as Cloudflare note that API calls are the fastest-growing type of Internet traffic, and GraphQL APIs are rapidly becoming a de facto way for companies to interact with data. While REST APIs still dominate, GraphQL has a significant advantage: it prioritises giving clients exactly the data they request and nothing more.

As part of that, it can combine results from multiple sources, including databases and APIs, into a single response.

In short, it's more efficient, which can significantly reduce bandwidth usage and improve application responsiveness, and thereby both cost and performance.
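To make that concrete, here's a minimal sketch in TypeScript using the graphql-js package (the schema, the order data and the carrier API are hypothetical stand-ins): a single query pulls a database row and an external API result back as one response.

    import { graphql, buildSchema } from "graphql";

    const schema = buildSchema(`
      type Order { id: ID! status: String eta: String }
      type Query { order(id: ID!): Order }
    `);

    // Hypothetical back ends: one "database", one external "API".
    const ordersDb = new Map([["42", { id: "42", status: "shipped" }]]);
    async function fetchEtaFromCarrierApi(id: string): Promise<string> {
      return "2 days"; // stand-in for a real HTTP call
    }

    const rootValue = {
      order: async ({ id }: { id: string }) => {
        const row = ordersDb.get(id);
        // Combine both sources into the one response GraphQL returns.
        return row ? { ...row, eta: await fetchEtaFromCarrierApi(id) } : null;
      },
    };

    graphql({ schema, source: '{ order(id: "42") { status eta } }', rootValue })
      .then((res) => console.log(JSON.stringify(res.data)));

The client asked for status and eta only, so that is all the response carries.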

However, the nature of the GraphQL structure means that caching responses for improved performance can be a significant challenge, so the secret to making GraphQL more efficient is distributing those GraphQL API servers so they operate (only and always) closer to end users, where and when needed.

Distributing application workloads is a go-to strategy to improve performance, reliability, security and a host of other factors.

When looking at API servers in particular, distribution results in high performance and reliability for the end user, lower costs for backend hosting, lower impact on backend servers, better ability to handle spikes, better security, cloud independence and (if done correctly) no impact on your development and management processes.

This last point is key, as deploying multi-cloud API services has historically been a largely manual process. But before we get to the how, let's dig a bit deeper into why you would want to distribute GraphQL servers.

The performance angle is straightforward: by reducing last-mile distance, latency and responsiveness are considerably improved. Users will experience this directly as a performance boost. In managing the network, you can control how broadly GraphQL servers are distributed, thereby balancing and tailoring performance and cost.

The cost factor is impacted by, among other things, data egress. API servers specifically, and microservice architectures in general, are designed to be very chatty.

When using a hyperscaler for cloud hosting, those data egress costs quickly add up. While there's a lot that can be done to optimise and right-size capacity and resource requirements, it's incredibly difficult to optimise egress cost. Distributing GraphQL servers outside the hyperscaler environment (and potentially adding distributed caching to the solution) can minimise these traffic costs.

There are several aspects to decreasing the impact on backend services and the way in which the development teams operate.

Some are inherent to GraphQL: for instance, versioning is no longer an issue.

Without GraphQL, you have to be careful about versioning and updating APIs. With GraphQL as a proxy, you have flexibility. The GraphQL endpoint can remain the same even if the backend changes. Frontend and backend teams thus become more loosely coupled, meaning they can operate at different paces without blocking, so the business moves faster. A given frontend can also have a single endpoint dedicated to it, called a Backend For Frontend (BFF), which further improves efficiency.
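As a rough illustration of that decoupling (all names here are hypothetical, not any particular product's API), a resolver can absorb a backend rename so the schema, and therefore every frontend query, stays put:

    import { graphql, buildSchema } from "graphql";

    const schema = buildSchema(`type Query { userName(id: ID!): String }`);

    // v2 of the backend returns { displayName } where v1 returned
    // { name }; only this resolver has to know about the change.
    async function fetchUserFromBackendV2(id: string) {
      return { displayName: "Ada" }; // stand-in for a real service call
    }

    const rootValue = {
      userName: async ({ id }: { id: string }) =>
        (await fetchUserFromBackendV2(id)).displayName,
    };

    graphql({ schema, source: '{ userName(id: "1") }', rootValue })
      .then((res) => console.log(res.data)); // { userName: 'Ada' }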

If caching is employed along with distribution, the impact of traffic on backend services is decreased, as API results themselves can be captured and stored for reuse. Distributed API caching, done well, greatly erodes the need to distribute the database itself and again cuts down on cost.

However, there are challenges with GraphQL when trying to connect data across a distributed architecture, particularly with caching.

With GraphQL, since you are using just one HTTP request, you need a structure to say "I need this information", hence you need to send a body. However, you don't typically send bodies from the client to the server with GET requests, but rather with POST requests, which are historically the only ones used for authentication. This means you can't analyse the bodies using a caching solution such as Varnish Cache, because these reverse proxies typically cannot analyse POST bodies.

This problem has led to comments like "GraphQL breaks caching" or "GraphQL is not cacheable".

While it is more nuanced than this, GraphQL presents three main caching issues:

CDNs are unable to solve this natively without altering their architecture. Some CDNs have created a workaround of changing POST requests to GET requests, populating the entire URL path with the POST body of the GraphQL request, which then gets normalised. However, this insufficient solution means you can only cache full responses.
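A sketch of that workaround in TypeScript (the URL scheme and the whitespace-collapsing normalisation are illustrative assumptions, not any particular CDN's implementation):

    import { createHash } from "node:crypto";

    // Turn a GraphQL POST body into a cacheable GET URL. Real gateways
    // parse and normalise the query properly; collapsing whitespace is
    // a crude stand-in for that step.
    function toCacheableGet(body: { query: string; variables?: object }): string {
      const query = body.query.replace(/\s+/g, " ").trim();
      const variables = JSON.stringify(body.variables ?? {});
      const key = createHash("sha256").update(query + variables).digest("hex");
      return `/graphql?op=${key}&query=${encodeURIComponent(query)}`;
    }

    console.log(toCacheableGet({ query: '{ order(id: "42") { status } }' }));

Because the whole normalised query becomes the cache key, a hit can only ever return the full response, which is exactly the limitation noted above.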

Bartholomew: Knows his API nuances and nuisances.

For the best performance, we want to be able to cache only certain aspects of the response and then stitch them together. Furthermore, terminating SSL and unwrapping the body to normalise it can also introduce security vulnerabilities and operational overhead.

GraphQL becomes more performant by using distribution to store and serve responses closer to the end user. It is also the only way to minimise the number of API requests.

This way, you can deliver a cached result much more quickly than doing a full roundtrip to the origin. You also save on server load as the query doesn't actually hit your API. If your application doesn't have a great deal of frequently changing or private data, it may not be necessary to utilise edge caching, but for applications with high volumes of public data that are constantly updating, such as publishing or media, it's essential.
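A minimal sketch of that edge-caching behaviour (the 30-second TTL and the key scheme are assumptions):

    // Serve a stored result when the same normalised query arrives
    // again, so the request never makes a roundtrip to the origin API.
    const cache = new Map<string, { body: string; expires: number }>();
    const TTL_MS = 30_000; // arbitrary time-to-live

    async function serve(queryKey: string, origin: () => Promise<string>) {
      const hit = cache.get(queryKey);
      if (hit && hit.expires > Date.now()) return hit.body; // no origin hit
      const body = await origin(); // full roundtrip only on a cache miss
      cache.set(queryKey, { body, expires: Date.now() + TTL_MS });
      return body;
    }

Keyed by a normalised query (as in the earlier sketch), repeated public queries are answered at the edge, while private or fast-changing data can simply bypass the cache.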

While there are multiple benefits to distributing GraphQL servers, getting there is typically not easy, as it requires a team to take on the burden of managing a distributed network. Issues like load balancing and shedding, DNS, TLS, BGP/IP address management, DDoS protection, observability and other networking and security requirements become front and centre. At a more basic level, how do you manually manage, orchestrate and optimise potentially hundreds of GraphQL servers?

These are the types of issues that have led to the rise of distributed hosting providers. The best of these use automation to take on the burden of orchestration and optimisation, allowing organisations to focus on application development and not API delivery. That said, there are specific considerations when it comes to GraphQL.

First, it will be necessary to host GraphQL containers themselves, not just API functionalities, thus eliminating Function as a Service (FaaS) as a distribution strategy. Moreover, it will be necessary to run other containers alongside the GraphQL server to handle caching, security, etc.

Ideally, you also want to ensure scalability through unlimited concurrency, enabling the distributed GraphQL servers to support a number of concurrent connections that exceeds the source database's connection limit.
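One way to picture that (a sketch under stated assumptions, not any vendor's implementation): cap database-bound work with a small pool while accepting far more client connections.

    // Accept many concurrent client connections while capping how many
    // queries reach the source database; the pool size of 10 is an
    // assumption standing in for a real connection limit.
    class Semaphore {
      private waiters: (() => void)[] = [];
      constructor(private free: number) {}
      async acquire(): Promise<void> {
        if (this.free > 0) { this.free--; return; }
        await new Promise<void>((resolve) => this.waiters.push(resolve));
      }
      release(): void {
        const next = this.waiters.shift();
        if (next) next(); // hand the slot straight to a waiter
        else this.free++;
      }
    }

    const dbSlots = new Semaphore(10);

    async function resolveWithDb<T>(work: () => Promise<T>): Promise<T> {
      await dbSlots.acquire(); // thousands of requests can wait here cheaply
      try { return await work(); }
      finally { dbSlots.release(); }
    }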

In the end, whether you roll your own solution or use one of the cloud-native hosting providers, distributing GraphQL API servers and other compute resources will significantly improve both the user experience and the overall cost and robustness of application services. In short, it makes all the sense in the world for developers.

Follow this link:

API series - Section: The why & how of distributing GraphQL - ComputerWeekly.com