
Cerebras Wants Its Piece Of An Increasingly Heterogenous HPC World – The Next Platform

Changing the compute paradigm in the datacenter, or even extending it or augmenting it in some fashion, is no easy task. A company, even one that has raised $720 million in seven rounds of funding in the past six years, has to be careful not to try to do too much too fast and lose focus, while at the same time adapting to the conditions in the field to get its machines doing real work and proving their worth on tough tasks.

This is where machine learning upstart and wafer-scale computing pioneer Cerebras Systems finds itself today, and it does not have the benefit of the ubiquity that the Intel X86 architecture had, or the relative ubiquity that the Nvidia GPU architecture had, as they challenged the incumbents in datacenter compute in the 1990s and the 2010s, respectively.

If you wanted to write software to do distributed computing on these architectures, you could start with a laptop and then scale the code across progressively larger clusters of machines. But the AI engines created by Cerebras and its remaining rivals, SambaNova Systems and Graphcore and possibly Intel's Habana line, are large and expensive machines. Luckily, we live in a world that has become accustomed to cloud computing, and now it is perfectly acceptable to do timesharing on such machines to test ideas out.

This is precisely what Cerebras is doing as it stands up a 13.5 million core AI supercomputer nicknamed Andromeda in a colocation facility run by Colovore in Santa Clara, the very heart of Silicon Valley.

This machine, which would cost just under $30 million if you had to buy it, is being rented by dozens of customers who are paying to use it to train on a per-model basis with cloud-like pricing, Andrew Feldman, one of the company's co-founders and its chief executive officer, tells The Next Platform. There are a bunch of academics who have access to the cluster as well. The capacity on Andromeda is not as cheap and easy as running CUDA on an Nvidia GPU embedded in a laptop in 2008, but it is about as close as you can get with a wafer-scale processor that would not fit inside of a normal server chassis, much less a laptop.

This is similar to the approach that rival SambaNova Systems has taken, but as we explained when talking to the company's founders back in July, a lot of customers are going even further and are tapping SambaNova's expertise in training foundation models for specific use cases as well as renting capacity on its machines to do their training runs.

This approach, which we think all of the remaining important AI training hardware vendors will need to take (that would be Cerebras, SambaNova, Graphcore, and, if you want to be generous, Intel's Habana Labs division, if Intel doesn't shut it down as part of its looming cost cuts), is not so much a cloud or hosting consumption model as it is the approach IBM took in the 1960s at the dawn of the mainframe era with its System/360s. Back then, you bought a machine and you got white glove service and programming included with the very expensive services because so few people understood how to program applications and create databases underneath them.

Andromeda is, we think, a first step in this direction for Cerebras, whose customers are very large enterprises and HPC centers that already have plenty of AI expertise. But the next and larger round of customers (the ones that will constitute a real revenue stream, and possibly profits, for Cerebras and its AI hardware peers) are going to want access not just to flops, but to deep expertise so models can be created and trained for very specific workloads as quickly as possible.

Here are the basic feeds and speeds of the Andromeda system:

Each of the CS-2 nodes in the Andromeda cluster has four 64-core AMD Epyc processors in it that do housekeeping tasks for each of the WSE-2 wafers, which have 2.6 trillion transistors implementing 850,000 cores and their associated 40 GB of SRAM. That embedded SRAM memory on the die has 20 PB/sec of aggregate bandwidth, and the fabric implemented between the cores on the wafer has an aggregate bandwidth of 220 Pb/sec. Cerebras calls this mesh fabric linking the cores SwarmX, and a year ago this interconnect was extended over a dozen 100 Gb/sec Ethernet transports to allow the linking of up to 192 CS-2 systems into a single system image. Across those 16 CS-2 machines, the interconnect fabric has 96.8 Tb/sec of aggregate bandwidth.
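
For a rough sense of scale, the per-wafer numbers above multiply out to cluster totals roughly as follows (a back-of-the-envelope sketch in Python using only the figures quoted; note that 16 x 850,000 gives 13.6 million cores, slightly above the 13.5 million figure cited for Andromeda, presumably a matter of yield or configuration):

```python
# Back-of-the-envelope aggregation of the published per-wafer WSE-2 figures
# to Andromeda's 16 CS-2 nodes. All inputs are the numbers quoted above.
NODES = 16

cores_per_wafer    = 850_000   # WSE-2 cores per wafer
sram_per_wafer_gb  = 40        # on-wafer SRAM, GB
sram_bw_pb_s       = 20        # SRAM bandwidth per wafer, PB/s
fabric_bw_pb_s     = 220       # on-wafer mesh bandwidth, Pb/s (petabits)

print("cores:",              NODES * cores_per_wafer)      # 13,600,000
print("SRAM (GB):",          NODES * sram_per_wafer_gb)    # 640
print("SRAM bw (PB/s):",     NODES * sram_bw_pb_s)         # 320
print("mesh fabric (Pb/s):", NODES * fabric_bw_pb_s)       # 3,520
```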

Just like you can plug FPGAs together with high speed interconnects and run a circuit simulation as a single logical unit because of the high speed SerDes that wrap around the FPGA pool of configurable logic gates, the Cerebras architecture uses the extended SwarmX interconnect to link the AI engines together so they can train very large models across up to 163 million cores. Feldman says that Cerebras has yet to build such a system and that this scale has been validated thus far only in its simulators.

That SwarmX fabric has also been extended out to what is essentially a memory area network, called MemoryX, that stores model parameter weights and broadcasts them to one or more CS-2 systems. The SwarmX fabric also reduces gradients from the CS-2 machines as they do their training runs. So the raw data from training sets and the model weights that drive the training are disaggregated. In prior GPU architectures, the training data and model weights have been in GPU memory, but with fast interconnects between CPUs and GPUs and the fatter memory of the CPU, data is being pushed out to the host nodes. Cerebras is just aggregating parameter weights in a special network-attached memory server. The SwarmX fabric has enough bandwidth and low enough latency (mainly because it is actually not running the Ethernet protocol, but a very low latency proprietary protocol) to quickly stream weights into each CS-2 machine.

By contrast, the 1.69 exaflops Frontier supercomputer at Oak Ridge National Laboratory has 8.73 million CPU cores and GPU streaming multiprocessors (the GPU equivalent to a core), and 8.14 million of those are the GPU SMs that comprise 97.7 percent of the floating point capacity. At the same FP16 precision that is the high end for the Cerebras precision, Frontier would weigh in at 6.76 exaflops across those GPU cores. AMD does not yet have sparse matrix support for its GPUs, but we strongly suspect that will double the performance, as is the case with Nvidia Ampere A100 and Hopper H100 GPU accelerators, when the Instinct MI300 GPU accelerators (which we will start codenaming Provolone, as the companion to Genoa CPUs, if AMD doesn't give us a nickname soon) ship next year.

In any event, as Frontier is configured with its Instinct MI250X GPUs, you get 6.76 exaflops of aggregate peak FP16 for $600 million, which works out to $88.75 per teraflops for either dense or sparse matrices. (We are not including the electric bill for power and cooling, or the cost of storage, just the core system.)

That's a lot of flops compared to the 16-node Andromeda machine, which only drives 120 petaflops at FP16 precision with dense matrices but, very importantly, delivers close to 1 exaflops with the kind of sparse matrix data that is common in the foundation large language models that are driving AI today.

Hold on. Why is Cerebras getting an 8X boost for its sparsity support when Nvidia is only getting a 2X boost? We don't know yet, but we just noticed that and are trying to find out.

The WSE-2 compute engine only supports half precision FP16 and single precision FP32 math, plus a proprietary CB16 floating point format that has a 6-bit exponent; regular IEEE FP16 has a 5-bit exponent, and the BF16 format from Google's Brain division has an 8-bit exponent, which makes it easier to convert to FP32 formats. So there is no extra boost coming from further reduced precision down to, say, FP8, FP4, or FP2. As far as we know.
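
To make the exponent comparison concrete, here is a small sketch of the three 16-bit formats mentioned. The FP16 and BF16 layouts are standard; the CB16 split of 1 sign + 6 exponent + 9 mantissa bits is our assumption inferred from the stated 6-bit exponent, as is the IEEE-like encoding used for the range estimate, since the exact layout is not spelled out here:

```python
# Rough comparison of the 16-bit floating point formats mentioned above.
# FP16 and BF16 layouts are standard; the CB16 split (1 sign + 6 exponent
# + 9 mantissa bits) and its IEEE-like encoding are assumptions.
formats = {
    "FP16 (IEEE)":    (1, 5, 10),   # (sign, exponent, mantissa) bits
    "CB16 (assumed)": (1, 6, 9),
    "BF16":           (1, 8, 7),
}
for name, (sign, exp, mant) in formats.items():
    bias = 2 ** (exp - 1) - 1                    # IEEE-style exponent bias
    max_normal = (2 - 2 ** -mant) * 2.0 ** bias  # largest representable normal
    print(f"{name}: {exp} exponent bits, {mant} mantissa bits, "
          f"max value ~ {max_normal:.3g}")
```

The wider exponent is what buys dynamic range (and, in BF16's case, easy conversion to FP32), at the cost of mantissa precision.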

At $30 million, the 16-node Andromeda cluster costs $250 flat per teraflops for dense matrices, but only $31.25 per teraflops with sparse matrices. It only burns 500 kilowatts, compared to the 21.9 megawatts of Frontier, too.
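
The price/performance arithmetic of the last few paragraphs can be restated in a few lines (using only the figures quoted above):

```python
# Price per teraflops for Frontier and Andromeda, using the article's figures.
frontier_cost    = 600e6                   # $ for the core system
frontier_fp16    = 6.76e18                 # peak FP16 flops, dense or sparse

andromeda_cost   = 30e6                    # $
andromeda_dense  = 120e15                  # FP16 flops, dense matrices
andromeda_sparse = 8 * andromeda_dense     # ~0.96 exaflops with sparsity

per_tf = lambda cost, flops: cost / (flops / 1e12)
print(f"Frontier:  ${per_tf(frontier_cost, frontier_fp16):.2f}/TF")            # ~88.76
print(f"Andromeda: ${per_tf(andromeda_cost, andromeda_dense):.2f}/TF dense")   # 250.00
print(f"Andromeda: ${per_tf(andromeda_cost, andromeda_sparse):.2f}/TF sparse") # 31.25
```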

But here is the real cost savings: A whole lot less grief. Because GPUs are relatively small devices (at least compared with an entire wafer with 850,000 cores on it), running large machine learning models on them means chopping up datasets and using a mix of model parallelism (running different layers of the model on different GPUs that have to communicate over the interconnect) and data parallelism (running different portions of the training set on each device and doing all of the work of the model on each device individually). Because the WSE-2 chip has so many cores and so much memory, the training set can fit in the SRAM and Cerebras only has to do data parallelism, and it only calculates one set of gradients on that dataset rather than having to average them across tens of thousands of GPUs. This makes it much easier to train an AI model, and because of the SwarmX interconnect, the model can scale nearly linearly with training data and parameter count, and because the weights are propagated using the dedicated MemoryX memory server, getting weights to all of the machines is also not a problem.

"Today, we can support 9 trillion parameter models on one CS-2," says Feldman. "It takes a long time, but the compiler can work through them and it can place work and we can store it using MemoryX and SwarmX. We don't do model parallelism because our wafer is so big that we don't have to. We extract all of the parallelism by being strictly data parallel, and that is the beauty of this."

To be honest, one of us (Tim) did not fully appreciate the initial architecture choice Cerebras made and the changes announced to it last year, while the other of us (Nicole) did. That's why we are each other's co-processor. . . .

To be very clear, Cerebras is not doing model parallelism across those 16 CS-2 nodes in any fashion. You chop the dataset into the same number of pieces as the nodes you have. The SwarmX and MemoryX work together to accumulate the weights of the model for the 16 nodes, each with their piece of the training data, but the whole model runs entirely on its subset of data within one machine, and then the SwarmX network averages the gradients and stores the final weights on the MemoryX device.
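
As a purely illustrative sketch of that data-parallel flow (this is not Cerebras software or its API; MemoryX and SwarmX are stood in for by plain NumPy operations), one training step looks roughly like this:

```python
import numpy as np

# Schematic sketch of the purely data-parallel loop described above.
# "MemoryX" (weight store/broadcaster) and "SwarmX" (gradient-reducing fabric)
# are illustrated with ordinary NumPy; nothing here is Cerebras' actual stack.

def train_step(weights, shards, grad_fn, lr=1e-3):
    # MemoryX role: broadcast one copy of the weights to every node.
    per_node_weights = [weights.copy() for _ in shards]
    # Each node runs the *whole* model on its own shard of the training data
    # (data parallelism only; no layers are split across nodes).
    grads = [grad_fn(w, shard) for w, shard in zip(per_node_weights, shards)]
    # SwarmX role: reduce (average) the gradients across nodes, after which
    # the updated weights are stored back centrally.
    mean_grad = np.mean(grads, axis=0)
    return weights - lr * mean_grad

# Toy usage: 16 shards and a linear-regression gradient standing in for "the model".
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1600, 8)), rng.normal(size=1600)
shards = [(X[i::16], y[i::16]) for i in range(16)]
grad = lambda w, s: 2 * s[0].T @ (s[0] @ w - s[1]) / len(s[1])

w = np.zeros(8)
for _ in range(100):
    w = train_step(w, shards, grad)
```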

The scaling that the Andromeda machine (it was very hard to not say Strain there) is seeing is damned near linear across a wide variety of GPT models from OpenAI.

With each increase in scale, the time it takes to train a model is proportionately reduced, and this is important because training times on models with tens of billions of parameters are still on the order of days to months. If you can cut that by a factor of 16X, it might be worth it, particularly if you have a business that requires frequent retraining.

Here is the thing. The sequence lengths (a gauge of the resolution of the data) keep getting longer and longer to provide more context for the machine learning training. AI inference might have a sequence length of 128 or 256 or 384, but rarely 1,024, but training sequence lengths can be much higher. In the table above, the 1.3 billion GPT-3 and 25 billion GPT-J runs had 10,240 sequence lengths, and the current CS-2 architecture can support close to 50,000 sequence lengths. When Argonne National Laboratory pitted a cluster of a dozen CS-2s against the 2,000-GPU Polaris cluster, which is based on Nvidia Ampere A100 GPUs and AMD Epyc 7003 CPUs, Polaris could not even run the 2.5 billion and 25 billion GPT-3 models at the 10,240 sequence level. And on some tests, where a 600 GPU partition of Polaris was pitted against the dozen CS-2 machines, it took more than a week for the Polaris system to converge when using a large language model to predict the behavior of the coronavirus genome, but the Cerebras cluster's AI training converged in less than a day, according to Feldman.

The grief of using Andromeda is also lower in another way: It costs less than using GPUs in the cloud.

Just because Andromeda costs around $30 million to buy doesn't mean that a timeslice of the machine is priced in proportion to that cost, any more than the price that Amazon Web Services pays for a server directly reflects the cost of an EC2 instance sold from the cloud. GPU capacity costs are all over the map on the clouds, on the order of $4 to $6 an hour per GPU on the big clouds, and for an equivalent amount of training for GPT-3 models, Feldman says that the Andromeda setup could cost half of that of GPUs and sometimes a lot less, depending on the situation.

At least for now, Cerebras is seeing a lot of action as an AI accelerator for established HPC systems, often machines accelerated by GPUs and doing a mix of simulation and modeling as well as AI training and inference. And Feldman thinks it is absolutely normal that organizations of all kinds and sizes will be using a mix of machinery (a workflow of machinery, in fact) instead of trying to do everything on one architecture.

"It is interesting to me that this sounds like a big idea," says Feldman with a laugh. "We build a bunch of different cars to do different jobs. You have a minivan to go to Grandma's house and soccer practice, but it is terrible for carrying 2x4s and 50 pound bags of concrete, and a truck isn't. And you want a different machine to have fun, or to haul logs, or whatever. But the idea that we can have one machine and drive its utilization up to near 100 percent is out the window. And what we will have are computational pipelines, a kind of factory view of big compute."

And that also means, by the way, that keeping a collection of machines busy all the time and getting the maximum value out of the investment is probably also out the window. "We will be lucky," says Feldman, "if this collection of machinery gets anywhere between 30 percent and 40 percent utilization." But this will be the only way to get all kinds of work done in a timely fashion.



Melita Business showcasing Cloud and Fibre connectivity at SiGMA – Times of Malta

Melita Business will be showcasing its Fibre connectivity and Cloud services, along with colocation and end-to-end network infrastructure management at the SiGMA Summit, which is the leading forum for the iGaming Industry.

Melita Business experts will be on hand to talk about disaster recovery, backup and cloud hosting solutions, all complemented by high speed fibre connectivity with fully redundant international links connecting to the world's leading internet carriers in Milan.

The iGaming industry accounts for around 10 per cent of Malta's GDP, making it an important player in the economy.

Malcolm Briffa, Director of Business Services at Melita, explained: "Alongside our hosting and cloud services, the Melita Business team will also be sharing best practices, solutions, and professional advice to iGaming companies that require fast and reliable connectivity through dedicated fibre internet, or international private links, available for companies located across Malta and Gozo."

Malta has long established itself as a leading remote gaming jurisdiction with an efficient licensing process and a swift regulatory system. Thanks to its adaptive responsiveness to the iGaming industry, the country now boasts the largest number of licensed operators in the world. The Melita Business team will be displaying the company's future-proof technology and hard-earned expertise on Stand C01. Dedicated consultation sessions can be reserved at sales@melitabusiness.com.




Built to Linux Cloud VPS Server Hosting With SSD and Control Panel Via Onlive Server – Digital Journal

Onlive Server offers a Cloud VPS Server that can give you the perfect balance of power and flexibility, and USA Server Hosting has some of the best options in the business. In this blog post, we'll look at some of the reasons this server could be the right choice for your website and some of the top features our servers can offer.

A VPS, or Virtual Private Server, is a server that runs in a cloud data center and can be used just like a physical server would be. The benefits of using virtual servers include flexibility, scalability, cost-efficiency, and reliability. A Cloud USA VPS Server is less expensive than a physical one because you only need to pay for the resources that you use.

Scalability: USA VPS can be quickly scaled up or down as needed, so you only pay for the resources you use.

Flexibility: With this kind of server, you're not tied to any one physical location; you can easily move your server to another data center if needed.

Reliability: These servers are designed to be highly available and can tolerate failures of individual components without affecting your website.

Benefits of Using a Cloud VPS?

Using cloud USA VPS Hosting to boost your website performance has many benefits. Perhaps the most obvious benefit is that you can scale your website without worrying about capacity issues. It also offers great flexibility since you can easily add or remove resources as your website demands change. Finally, it can be a great cost-saving measure since you only pay for the resources you use.

More Scalable

The cloud is more scalable than traditional VPS hosting. With USA VPS Server, you are limited by the size of the server you are using. With the best and most affordable cloud server hosting, you can easily scale up or down as needed.

Provides Top-Class Security

USA VPS provides top-class security to its customers with the help of the latest technologies and a team of highly skilled security professionals.

Media Contact: Onlive Server Private Limited, +91 6387659722, [email protected]

Follow the full story here: https://przen.com/pr/33484979



Week in Insights: More Is Merrier at Thanksgiving – Bloomberg Tax

In a few days, my house is going to be loud. I am totally looking forward to it.

It won't start as a roar but as more of a gradual turning up of the volume. My daughters will arrive home from college at the beginning of the week. Shortly after, my brother and his family will drive in from Connecticut, while my parents will take a little longer to arrive from North Carolina.

By the time Thanksgiving arrives, I'll be hosting 26 people for dinner. Yes, 26.


It's not all family. We also welcome international students to our home over the holidays. My law school alma mater (shout out to Temple U!) coordinates a Thanksgiving dinner program every year. Recognizing that it's difficult for international students to travel home over the break, and that the dorms will likely be empty, they match as many students with local host families as they have spots available.

For the past several years, we've hosted two or three students. Sometimes, they come back in subsequent years; we made great friends with a family who returned and brought their young son. But this year, we're hosting seven (you're allowed to cap the number of students you can comfortably host, and we didn't submit a limit).

When I saw the list, I was a little overwhelmed. But then I realized that I couldn't say no. We have the space. We have the resources to provide dinner. We have a great family who loves to meet new people. And I thought about my girls in college and what it would be like if they were alone during the holidays. So, I called our local party rental company, ordered an additional table and some chairs, and started reworking my menu. I don't regret it for a minute.

Over the years, some of our best conversations have been across those dinner tables. I love hearing about what it's like to grow up in other cultures and what other countries may think about the US and its various government systems (of course, I ask about tax whenever I can). I never walk away without learning something new.

I think we always learn the most from each other, whether that happens over a dinner table, in an office, or on the internet. I hope you'll do that in your own life, too, whether it's taking the time to talk with a colleague, penning an article for us, connecting on social media, or even inviting students into your home. (Check to see if there's a program in your area.)

At Bloomberg Tax, we aim to make it easy for you to share and receive information. Our experts offer great commentary and insightful analysis on federal, state, and international tax issues to keep you in the know, and that's definitely something to be thankful for!

The Exchange: It's where great ideas intersect.

Kelly Phillips Erb

True or false: If your employer gives you a Thanksgiving turkey or gift card to buy a turkey, it's excludable from taxable income. Answer at the bottom.

How much should your firm or practice get involved in political issues, if at all? Should your business contribute to an elected official, political candidate, or a political cause or take a political candidate or official as a client? If you do, what questions could you face from your stakeholders?

Find the answers to that question and more by joining Bloomberg Tax and Bloomberg Law Insights & Commentary teams on Wednesday, Nov. 30, from noon to 1 p.m. ET for "Should Your Company Take a Stand on Political and Social Issues?," part of our free virtual Lunch & Learn series. Two attorneys from Skadden, Arps, Slate, Meagher & Flom LLP will lead a discussion about political and social issues in the workplace.

You can join us for this event, no registration required, by signing on here at noon ET on Nov. 30 or by calling +1-929-205-6099 US and entering the meeting ID: 975 6437 0979.

Sgt. Brian Ellis of Mt. Airy, Md. (L) shares a laugh with Sgt. Cesar Romero of Brentwood, N.Y. during a Thanksgiving lunch, Nov. 27, 2003, at field base St. Mere near Fallujah, Iraq.

When the pricing and financial performance of a sports franchise are misaligned, external financing may not be advantageous or practical. But understanding the tax impact of the terms of a deal is simply adhering to the strategy that the best offense is a good defense, say RSM US LLP's Amanda Hodgson, Jamie Sanders, and Justin Krieger.

President Joe Biden has nominated Daniel Werfel as the IRS's next commissioner. With the right leadership and oversight, the agency can deliver the 21st century service that all taxpayers have the right to expect while taking a big bite out of the $600 billion-plus annual tax gap, say former IRS commissioners Fred Goldberg and Charles O. Rossotti.

Cloud-based software-as-a-service business models are enabling rapid growth, and the accounting industry needs to adapt. Stout's Steve Sahara, Jeremy Krasner, Brad Burch, Kevin Pierce, and Joe Randolph share some important aspects of SaaS revenue recognition.

The IRS has been using John Doe summonses in its digital asset enforcement since 2016, when it first attempted to identify crypto exchange customers. Taxpayers usually lost in court objecting to the summonses, but Ropes & Gray attorneys discuss a novel strategy in a federal appeals court.

The IRS's new compliance program gives any retirement plan sponsor targeted for examination a 90-day review period to determine if they satisfy all tax law requirements. Best Best & Krieger LLP's Susan Neethling and Helen Byrens detail the pilot program and share some next steps for plan sponsors.

Incentive stock options provide executives with various tax benefits, but how do you know when to exercise and sell the underlying ISO stock? Alyssa Rausch of EisnerAmper summarizes the basic tax rules and common tax strategies.

Foreign-incorporated cruise lines based in US ports have often been exempted from US income tax due to Section 883. In a two-part article, Gunster, Yoakley & Stewart, P.A.'s Alan S. Lederman discusses whether US corporate alternative minimum tax could apply to these cruise lines or whether the proposed OECD Pillar Two minimum tax will impose a minimum tax regime on such cruise lines.

Christos Theophilou of Taxand explains that multinational enterprises need to have adequate preparation in place to satisfy the scrutiny of intra-group services by tax authorities, and he provides a practical example and case study to illustrate the issues.

As discussion of how to tackle the global challenge of climate change continues at COP27, Chris Morgan of KPMG considers current national approaches, including tax measures, and suggests more flexibility for countries to decide what approach is most suited to their individual needs.

Rob Janering of Crowe looks at the value-added tax compliance considerations for UK organizations supplying digital events and discusses the impact of the upcoming changes in the EU's position on supplies of live online services.

A profit-per-employee tax could go a long way to support the American workforce and to ensure that Big Tech factors the human cost of business into their plans as much as profits, says writer Hassan Tyler.

At The Exchange, we welcome responses from our readers and encourage diversity and civil discussion. We are especially interested in responses that add to the conversation, or introduce a different point of view. If you have a response to one of our published Insights, we'd love to hear from you.

Nonfungible tokens hold intrinsic value due to their digital properties and traits. In this edition of A Closer Look, Stout's Fotis Konstantinidis looks at the challenges of valuing NFTs, as well as the methods and data used for valuation.

Journalists make a Thanksgiving toast at the UN Club Nov. 22, 2001, in Islamabad, Pakistan.

Photographer: Paula Bronstein via Getty Images

What's on Bloomberg Tax Insights' wish list right now? For December, we're hoping to end the year on a high note with a wrap-up of the year and a peek ahead. We're interested in: What did you think was newsworthy in tax in 2022? What should we look out for in 2023? What should tax professionals do now to prepare for next tax season? We're looking for a thoughtful take that will get tax professionals talking about next year...even before the calendar flips over.

Our Insights articles (about 1,000 words) are written by tax professionals offering expert analysis on current tax practice and policy issues, tax trends and topics, and tax and accounting firm practice and management. If you have an interesting, never-published article for publication, we'd love to hear about it. You can contact our Insights team at TaxInsights@bloombergindustry.com.

Private equity firm Cinven's recent $720 million acquisition of tax preparation service TaxAct is the latest example of the changing landscape around mergers and acquisitions. But filing your taxes should be free and not involve entities motivated by profit, full stop, says columnist Andrew Leahey.

Despite numerous controversies, the 2022 World Cup in Qatar is expected to generate billions of dollars. I take a look at where the money comes from, where it goes, and how it might be taxed.

As boiler room schemes gather steam across the globe, it's important to remain alert. In my column, I explain some tips to consider during International Fraud Awareness Week.

It's been a busy week in tax news from state capitals to Washington. Here are some stories you might have missed from our Bloomberg Tax news team. *Note: Your Bloomberg Tax login will be required to access Tax News.

Carlos Martinez has joined White & Case as a partner in the tax practice in Mexico City, the firm said.

Martín Guzmán has been appointed to the role of commissioner by the Independent Commission for the Reform of International Corporate Taxation, according to a news release.

RKL LLP has added Thomas Romano and Jennifer Witmer as managers in the tax services group in Pennsylvania, the advisory firm said.

Leila Vaughan has rejoined Faegre Drinker as a partner in the investment management practice group in Philadelphia, the firm said.

If you are changing jobs or being promoted, let us know. You can email your submission to TaxMoves@bloombergindustry.com for consideration.

Our Spotlight series highlights the careers and lives of tax professionals across the globe. This week's Spotlight is on Caroline Cao, a partner at Lewis Brisbois Bisgaard & Smith, LLP in Sacramento, Calif.

A Chicago Police officer carries a tray of pre-Thanksgiving meals to waiting guests Nov. 21, 2002 at the Columbus Park Refectory in Chicago.

Photographer: Tim Boyle via Getty Images

False. If your employer gives you a turkey, ham, or other item of nominal value at Thanksgiving, Christmas or other holidays, that is excludable from income. However, if your employer gives you cash, a gift card, or a similar item that you can easily exchange for cash (even if you promise to buy a turkey), that's taxable regardless of the amount.

We talk about tax a lot. But there's a lot more that you might hear us talking about if you popped into one of our Teams meetings. Here's a quick look at what some of us are watching, reading, and listening to this week:

Watching:

Reading:

Listening:

Sign up for your free copy of our newsletter delivered to your inbox each week. Just head over to The Exchange and sign up using the green Free Newsletter Signup box at the top, or just go directly to the newsletter sign-up page.

Your feedback and suggestions are important to us, so don't hesitate to reach out on social or email me directly at kerb@bloombergindustry.com.



4 tech investment trends to watch out for in 2023 – GrowthBusiness.co.uk

From a pricing and valuation perspective, it's been a difficult year for tech stocks. Even blue-chip names such as Microsoft, Apple and NVIDIA have faced material price corrections on the financial markets. At one point in mid-October, Morningstar's US Technology Sector index was down more than a third on the beginning of the year. Even after a slight rally, it remained a quarter lower in early November.

However, technology businesses continue to demonstrate good levels of growth as the world becomes increasingly digitised. Additionally, many tech businesses have resilient characteristics meaning that they remain attractive to investors.

>See also: Five venture capital trends for start-ups to follow in 2022

There are good reasons to be optimistic for the future of tech. For a start, there have been several years of strong growth and not just among the giants. Last year was the best-ever for the UK tech industry in terms of investment, with the sector securing £29.4bn in funding. As the Department for Digital, Culture, Media & Sport summed up: "More VC investment, more unicorns, more jobs and more futurecorns."

Perhaps more importantly, as businesses face up to a challenging environment, they'll rely on technology more than ever to secure the efficiencies and opportunities necessary to survive and even thrive in a tougher climate. This should ensure that certain niches of the technology ecosystem will continue to see growth.

>See also: Why it's a good time to invest in UK start-ups if you're a dollar investor

Four tech investment trends in particular stand out.

In an inflationary market, many businesses will be looking to drive efficiencies. This will lead to continued and growing demand for AI, machine learning and the automation which comes with it. Reducing reliance on human resources and boosting efficiency will always be popular as labour and other costs rise, but what we are seeing now is applications going well beyond simply streamlining processes.

For example, Mobysoft, an ECI investment, uses predictive analytics to help social housing providers keep their tenants in well-maintained homes, while improving rent collection and reducing arrears.

AI can also help drive new business. CPOMS, a former ECI investment, provides a good example. It used Alteryx analytics software to create a new business identification model, prioritising prospective customers by analysing the most common features of its existing user base. Such tools will be valuable for businesses looking to source new revenue.

The cloud also offers a significant opportunity for businesses to both cut costs and develop new capabilities. Public cloud hosting allows businesses to rapidly scale up or down their operations without incurring significant capital expenditure, which can prove useful either for investing additional free cash in growth initiatives or for taking defensive action in more scenarios. The old financial adage that "cash is king" remains as true today as it ever did.

Moreover, the range of cloud services and tooling offered by the hyperscalers is growing. For example, Microsoft has launched an IoT Hub in its Azure platform, which enables companies to construct customised solutions for complex scenarios to facilitate IoT use cases. This is likely to become even more useful as the range of potential applications of IoT expands with the rollout of the 5G spectrum and the increasing prevalence of low-power IoT networks.

Crucially, public cloud platforms offer businesses the ability to keep tighter control of their fixed costs, both in terms of the technology, the internal IT capability and the floor space which historically may have been used to house on-premise infrastructure. This capability point will be particularly valuable at a time when such skills are expensive and in short supply.

"Learn to code" was once the default advice for employees hit by redundancy or those who found themselves in a declining industry. More recently, it has become a significantly in-demand skill as SaaS and software have become increasingly prevalent. However, while historically it was imperative for a developer to learn one, if not several, distinct coding languages, the growing maturity of low-code platforms should democratise development, making it simpler for non-technical individuals to create products and applications without having to learn a language.

This trend will allow a greater range of businesses to produce software and accelerate products to market by simplifying development. It may also help address shortages in the number of developers given the current war for talent in this space.

Finally, cybersecurity is certain to remain a priority for businesses and individuals, regardless of how the economy performs, particularly given the increasing number of malicious individual or state actors. Notably, there have been massive global increases in the use of ransomware to extract capital from afflicted businesses. There are even guides on how to launch ransomware attacks on the dark web, so this is an increasingly important and sadly frequent issue that businesses have to face.

High-profile attacks tend to make headlines, the most recent being a severe attack on Uber in September 2022, which was started by a hacker who manipulated an employee into sharing their password through a remote access portal on their mobile phone. Via this one small error, the hacker was able to gain access to the company's critical infrastructure. However, it is not just high-profile corporates that are at risk. The threat is ubiquitous, with attackers targeting businesses of all sizes, and on occasion for relatively small sums of money.

In the face of an increasing frequency of cyber-attacks, it is imperative that businesses protect their digital assets, IP and customers' data. This creates a beneficial backdrop for cyber security businesses to grow and create value for shareholders.

Businesses should continue to invest in these areas to ensure they are best placed for the future. Technologies and tech services providers in these areas are likely to thrive, with strong prospects for growth and valuations.

Daniel Bailey is investment director at ECI Partners

Who are the UK's next unicorns?



Scott Gould, VP of Business Operations at Element Critical – Spiceworks News and Insights

Hybrid or remote work may have been born out of necessity, but the work model has made an indelible imprint and has become part of the corporate culture. Scott Gould, VP of business operations at Element Critical, shares how enterprises can tackle the challenges and reap the benefits of a hybrid workforce.

According to Pew Research Center, 59% of employees work from home all or most of the time. As employees continue to assert their choice to work from home, remote work is yet another force that is concurrently pushing organizations to increase digital business transformation efforts.

Ladders CEO Marc Cenedella has suggested that this massive shift from office to remote work is America's most significant societal change since the end of World War II. Whether businesses embrace the shift by going fully remote or balancing a hybrid model, the emerging extended enterprise offers an array of possibilities for employers and employees alike.

Businesses must overcome some challenges to leverage these benefits. Challenges can range from how to deliver both real-time and enriching interactions for geographically distributed employees to fostering IT security amid changing circumstances. Here are a few examples of obstacles businesses must address and the benefits they hope to achieve.

See More: Why Colocation Is the Best Bet for Reliable and Cost-Effective Data Storage

Just as remote work is expanding the workplace landscape, the IT infrastructure supporting businesses and employees has undergone concurrent transformational shifts. The former centralized computing strategy where businesses hosted their IT stack in a single location has also gone hybrid.

Since the dawn of digital business, organizations have needed a place to store data, applications, and computing. This IT infrastructure, referred to as a data center, can be housed onsite at the headquarters, in branch offices, hosted in a colocation data center, or in the cloud. In the past, many businesses were supported by a single IT compute/storage environment. Businesses now have IT resources spread across a variety of data center environments. Even companies implementing a cloud-only strategy at the onset of the pandemic are repatriating data or evolving into a hybrid cloud strategy.

Hybrid cloud strategies are defined by the simultaneous utilization of public clouds and colocation or on-premises data centers. Often, hybrid cloud strategies are pursued because they allow organizations to utilize the public cloud's scalability while keeping highly sensitive data secure on a private network.

Alternatively, multi-cloud strategies are when an organization utilizes a combination of cloud providers which can be two or more public clouds, two or more private clouds (colocation or on-premises), or a combination of public, private, and edge clouds to distribute applications and services. This allows businesses to utilize the cloud services they need while leveraging the stability and durability of colocation to support foundational IT architectures.

IT leaders realize a cloud-only strategy is expensive and insufficient to meet all the needs of today's businesses. The rising tide of companies pulling workloads out of the cloud is motivated by various reasons, including uptime concerns that affect brand protection, unsanctioned use of the public cloud, information security concerns, application lifecycle considerations, governance requirements, and data sovereignty.

Under a hybrid cloud solution, colocation data centers in key locations can offer the best environment to ensure high-quality connectivity between onsite/edge infrastructure and private and public clouds while addressing some of the top cloud computing challenges.

Highly connected colocation providers, with private network solutions and direct cloud connections, enable businesses to take advantage of what the cloud offers, such as speed and flexibility, while at the same time enjoying the benefits of greater uptime, resilience, control, and the additional security of the colocation data centers.

The new modern workplace requires bandwidth, security, and flexibility wherever employees and infrastructure reside. The bottom line is that building a workplace that meets employees' connectivity and productivity requirements for real-time or asynchronous engagement ultimately means investing in digital technologies.

For some companies, achieving these results may mean infusing native data center software and applications, including SaaS options, into their modern IT solution. Such adjustments will improve how employees work remotely, work internally, and deliver external services to the customer. Companies can also invest in tools to reduce security risks, such as adding two-factor authentication and encryption to devices, so confidential information is only available via virtual private networks and encrypted end-to-end systems.

For most companies, the bottom line is that having employees work outside the office goes beyond freeing up office space. This is just the first step toward the evolution of their IT strategy.

A remote business workforce built upon a Hybrid IT environment allows businesses to hire highly-skilled, technical leaders able to throttle their business solutions into high gear without being geographically limited to local-only staffing.

The pandemic certainly showed CIOs and IT leaders that modern business continuity requires IT departments and infrastructure built for adaptability. Emerging technology and connectivity tools can transform commerce and our lifestyles, even changing the paradigms of how and where we work. Yet they also need to be built upon increasingly connected data center architectures.

How are you ensuring that your IT infrastructure is adaptable and can support the demands of hybrid work? Share with us on Facebook, Twitter, and LinkedIn.




Co-Location Plays A Big Role In Hybrid Cloud, Too – The Next Platform

In the ongoing discussions about the still-evolving world of hybrid cloud, the focus tends to be on what enterprises are doing within their own on-premises datacenters and private clouds and their work with public cloud players like Amazon Web Services, Microsoft Azure, and Google Cloud.

Lost at times among this hybrid cloud talk is the growing complementary role of co-location facilities, those sites that can provide organizations with a cloud-like experience that can be less costly than a public cloud and offer strong security, high performance, and low latency. In addition, their varied locations can address local regulatory needs as well as enterprise demands as they move more of their compute and storage out to the edge to be nearer to where the data is being created.

VMware and co-location giant Equinix see an opportunity to address those needs. The two companies have been partnering since 2013, making VMware technology available in Equinix datacenters around the world. The companies have more than 3,000 joint customers, many of whom are looking for ways to bring the performance and access they have in their distributed multicloud world, but in an as-a-service manner, according to Zachary Smith, global head of edge infrastructure services at Equinix.

To this end, the two companies at the VMware Explore 2022 Europe show in Barcelona on Tuesday unveiled VMware Cloud on Equinix Metal, combining VMware's expansive cloud offerings and Equinix's bare metal-as-a-service, one of several cloud-related announcements VMware is making at the event.

"The goal here is to bring cloud-style experiences to the metro-location reach of Equinix," Smith said in a briefing with journalists and analysts. "We are hearing from enterprises that they want to have access to those locations across the world with that latency-sensitive, high-performance workload but with the ease and consumptive model of VMware Cloud. We're helping to move that workload to the edge, where thousands of enterprises and service providers are connecting. This is where people can really access that mission-critical data-heavy workload in our metro locations and interconnected across to their cloud workloads, to their on-prem, and to the rest of their ecosystem partners, and to do so with an operating model that they're very comfortable with."

The offering will preserve the single-tenant and location-specific assurance organizations are used to in their own datacenters but in a fully managed environment, he said.

Equinix, with its more than 240 highly interconnected datacenters (via the company's Platform Equinix) in 71 markets around the world, is a top player, along with the likes of Digital Realty and DigitalBridge, in a global co-location market that could grow from more than $46 billion two years ago to almost $203 billion by 2030.

Its growth strategy has been fueled in part by an aggressive acquisition strategy that includes its $3.8 billion acquisition of Telecity and $3.6 billion for 24 Verizon datacenters, both in 2016. Four years later, the company bought bare-metal automation specialist Packet for $335 million, giving Equinix a path to the edge through Packets capabilities to automate single-tenant hardware.

The Packet technology and the investment Equinix has put into it over the past two-plus years was key to what Equinix and VMware are offering now, Smith said.

"A DNA that Packet brought was a high amount of automation around physical infrastructure, which really unlocked this ability for us to create experience," he said. "VMware Cloud has done such a great job at creating a first-class, trusted, works-everywhere experience that requires a significant amount of infrastructure substrate, at least from the way we wanted to craft this experience. That DNA and programmability around a physical datacenter has allowed us to take this step."

Expanding the partnership with Equinix made sense for VMware, a company with deep roots in the datacenter but which has aggressively been pushing out to the cloud and, more recently, the edge, with the goal of being the essential technology vendor in an increasingly distributed IT world.

"In the on-prem world, customers enjoy a lot more security and data sovereignty," Narayan Bharadwaj, vice president of cloud solutions at VMware, said during the briefing. "They have a lot of control and they continue to run a lot of data-intensive, latency-sensitive applications in that particular world. They also enjoy the flexibility, agility and some of the innovation that the public cloud offers. The ask from customers and many of our partners is, 'How do we bring this all together? How do we create that on-demand model that the public cloud really pioneered, but then build that in with the performance, data-latency sensitivity and the enterprise assurance that all our customers look for?'"

VMware for several years has been building its cloud capabilities through such foundational offerings as vSphere, vSAN storage, NSX networking, the Aria cloud management portfolio, and its two-year-old Project Monterey, a suite for managing virtual machines and containers in a hybrid cloud environment. It also has developed relationships with the hyperscale cloud providers, particularly AWS but also Azure and Google. The partnership with AWS has been a cornerstone of VMware's cloud ambitions, and the Equinix bare metal-as-a-service deal expands what VMware can do, Bharadwaj said.

"There are many use cases that customers think through for different types of applications that demand different locations and different providers and hardware types," he said. "From a solution standpoint, VMware is presenting a very consistent solution that customers do enjoy today on VMware Cloud on AWS. It has its own differentiators in that model, in its choice of hardware, different locations, etc. With the Equinix relationship, it has other types of differentiation that are very, very unique. We have seen customers (because it's the VMware technology that allows for that) going to the public cloud, coming back on-prem for some workloads [and to] co-location as well. As long as it's on the VMware stack with the hardware compatibility, all of the hard engineering that we have done under the covers, we see customers adopting all kinds of distributed strategies. It's really application-driven."

The companies said the use cases for the joint VMware-Equinix service range from smart cities and video analytics to financial market trading, point-of-sale in retail, and workloads using artificial intelligence in the datacenter and at the edge. It also will help enterprises trying to find an as-a-service home for mission-critical workloads, Smith said.

"They need really high-performing infrastructure connected to private networks as well as public clouds so that they can move these mission-critical data-heavy workloads into a cloud-first operating model," he said. "They have network requirements. Almost everything that we see is around, 'How do we make better performance? How do we not backhaul as much traffic? How do we get the right data for our machine learning algorithms or for our high-intensity data apps?' Bringing that compute capability and control plane of VMware Cloud to the edge allows customers to benefit from a much greater TCO and higher performance throughout their application stack."

The offering will see the VMware Cloud stack delivered as a service throughout Equinix's International Business Exchange (IBX) datacenter platform, providing low-latency access to public and private clouds and to IT and network providers through the private Equinix Fabric interconnection.

Enterprises will pay VMware for its cloud software-as-a-service and Equinix for the bare metal-as-a-service capacity.

All this comes amid the ongoing bid by Broadcom to buy VMware for about $61 billion, a move that VMware shareholders late last week approved, pushing the deal forward.



Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition – Nature.com

Participants

The study complied with all relevant ethical regulations. The study protocol was approved by the Institute of Neuroscience and Psychology Ethics Committee at the University of Glasgow. Written informed consent was obtained in accordance with the Institute of Neuroscience and Psychology Ethics Committee at the University of Glasgow. Twenty-seven same-sex pairs of adult human participants took part in the fMRI experiment. This number was determined based on a priori estimates of the sample size necessary to ensure replicability on a task of similar length [97]. All were recruited from the participant database of the Department of Psychology at the University of Glasgow. For each couple, one participant was in the scanner and the other in an adjacent room. Two pairs were removed from the analysis: one for excessive head movements inside the scanner, the other for a technical problem with the scanner. The remaining couples of participants (7 male pairs, 18 female pairs) were all right-handed, had normal or corrected-to-normal vision, reported no history of psychiatric, neurological or major medical problems, and were free of psychoactive medications at the time of the study.

All participants played the Space Dilemma in pairs. Before starting the game, they were given a set of instructions explaining that they had to imagine they were foraging for food in a territory (a straight line representing the territory, Fig. 1) and were asked to make a prediction about the position of the food. They were told that in each trial the target food would appear somewhere in the territory, as its position is randomly sampled from a predefined uniform probability distribution. They were shown examples of possible outcomes of a trial (Fig. 1) and were given information about the conditions of the game. During the game, in each trial, they were presented with a bar moving across the space (representing their location) and asked to commit to a location by pressing a button while the bar passed through it as it moved across the linear space. Participants therefore chose their locations in the space through the timing of a button press. They indicated their choice by pressing one of three buttons on a response box. The bar takes 4s to move from one end of the space to the other. Once stopped, it remains at the chosen location for the remainder of the 4s. This location signalled their prediction about the target position. The two participants played simultaneously, first making their predictions and then watching the other player's response (for 1-1.5s). After both players had responded, the target would be shown (for 1.5s). Inter-trial intervals were 2-2.5s long. At any trial, the participant who made the best prediction (minimising the distance d to the target) was indicated as the trial's winner through the colour of the target, obtaining a reward which would depend on the distance to the target: the shorter the distance, the higher the reward. In the rare circumstance where players were equidistant from the target, such reward was split in half between the two players, who were both winners in the trial.

In order to enforce different social contexts we introduced a reward distribution rule whereby each trial reward would be shared between the winner and the loser according to the rule

$$R_{win}=\alpha R;\quad R_{lose}=\left(1-\alpha\right)R$$

(2)

Where $\alpha$ is a trade-off factor controlling the redistribution between winners and losers in each trial. By redistributing the reward between winner and loser, the latter would also benefit from the co-player minimising their distance to the target. Increasing the amount of redistribution (decreasing $\alpha$ below 1) constitutes an incentive to work out a cooperative strategy to decrease the average distance of the winner from the target (that is, irrespective of who the winner is) and therefore increase the reward available in each trial, which would be redistributed. Decreasing the amount of redistribution can instead lead to punishment for the losers (increasing $\alpha$ above 1), adding an incentive to compete to win the trial.

All participants first participated in a behavioural session where they were randomly coupled with one another and played three sessions of the game in three different conditions specified by the value of the trade-off factor $\alpha$. In the first condition ($\alpha=0.5$, cooperative condition), the reward was shared equally between the two players, irrespective of the winner. In the second condition, the winner gets twice the amount of the reward ($\alpha=2$, competitive condition), while the other player will lose from their initial stock an amount equivalent to the reward. In the third condition, the winner will get the full amount of the reward and the other will get nothing ($\alpha=1$, intermediate condition). The participants were instructed about the different reward distributions (through a panel similar to Fig. 2c). In total, participants played 60 trials in each of the three conditions for a total of 180 trials.
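
As a quick worked example of the redistribution rule in Eq. (2) under the three conditions (using an arbitrary trial reward of 10 points purely for illustration; in the actual task the trial reward depended on the winner's distance to the target):

```python
# Worked example of R_win = alpha * R, R_lose = (1 - alpha) * R for the
# three conditions. The trial reward of 10 points is arbitrary.
R_trial = 10.0
conditions = {"cooperative": 0.5, "intermediate": 1.0, "competitive": 2.0}
for name, alpha in conditions.items():
    win, lose = alpha * R_trial, (1 - alpha) * R_trial
    print(f"{name:>12} (alpha={alpha}): winner {win:+.1f}, loser {lose:+.1f}")
# cooperative  (alpha=0.5): winner  +5.0, loser  +5.0  (equal split)
# intermediate (alpha=1.0): winner +10.0, loser  +0.0
# competitive  (alpha=2.0): winner +20.0, loser -10.0  (loser pays out of their stock)
```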

At the end of the behavioural session, participants were asked to fill in a questionnaire in which their understanding of the game was assessed together with their social value orientation [98]. If they showed that they had understood the task and were eligible for fMRI scanning, they were later invited to the fMRI session, which occurred 1-3 weeks later. In total, 81 participants took part in the behavioural session and 54 participated in the fMRI session.

In the fMRI sessions, participants were matched with an unfamiliar co-player they had not played with in the behavioural session, and it was emphasised that they should not assume anything about their behaviour in the game. We did not use deception: participants briefly met before the experiment, when a coin toss determined who would go into the scanner and who would play the game in a room adjacent to the fMRI control room. In both the behavioural and fMRI sessions participants were rewarded according to their performance in the game, with a fixed fee of £6 and £8 respectively and an additional amount of money, up to £9, based on their task performance. At the end of the fMRI sessions, participants were asked to describe what their strategy was in the different social contexts. Their responses revealed a good understanding of the social implications of their choices (Supplementary Table 4). In both the behavioural and fMRI sessions, the order of the conditions was kept constant (cooperation-competition-intermediate) as we wanted all couples to have the same history of interactions.

Visual stimuli were generated by client computers using Presentation software (Neurobehavioral Systems) controlled by a common server running the master script in MATLAB. The stimuli were presented to the two players simultaneously. Each experiment was preceded by a short tutorial in which players could experience a few trials of each of the three sessions, allowing them to probe the effect of the variability in the task parameters.

We computed a payoff matrix for the Space Dilemma in the following way. Since the target position in each trial is random, the reward in each trial is also random, but because the target position is sampled from a uniform distribution, each position in the space is associated with an expected payoff that depends on the position of the other player (Fig. 1b). In a two-player game, the midpoint maximises the chance of winning the trial. For simplicity we therefore assume that players can either compete, positioning themselves in the middle of the space and maximising their chance of winning, or cooperate, deviating from this position by a distance Δ to sample the space and maximise the dyad's reward. For all combinations of competitive and cooperative choices, we can build an expected (average) payoff matrix that depends parametrically on Δ. We defined R as the expected reward for each of two players cooperating with each other, T as the expected temptation payoff for someone who competes against a cooperating player, S as the sucker payoff for a cooperator betrayed by their partner, and P as the punishment payoff when both players always compete. R, T, S and P can be computed analytically by integrating over all possible positions of the target and are equal to:

$$R=\left(\frac{3}{8}+\frac{\Delta}{2}-\Delta^{2}\right)$$

(3)

$$T=\alpha\left(\frac{3}{8}+\frac{\Delta}{2}-\frac{\Delta^{2}}{8}\right)+\left(1-\alpha\right)\left(\frac{3}{8}-\frac{5\Delta^{2}}{8}\right)$$

(4)

$$S=\alpha\left(\frac{3}{8}-\frac{5\Delta^{2}}{8}\right)+\left(1-\alpha\right)\left(\frac{3}{8}+\frac{\Delta}{2}-\frac{\Delta^{2}}{8}\right)$$

(5)

The expected reward for cooperative players, R, is the same in all conditions. This is because the expected reward is equal to the average of the possible rewards associated with winning and losing, and players who cooperate with equal Δ have an equal chance of winning the trial.

Therefore \(R=(R_{win}+R_{lose})/2=(\alpha R_{trial}+\left(1-\alpha\right)R_{trial})/2=R_{trial}/2\), which does not depend on α. The same holds for the expected reward for competitive players, P. When one player cooperates and the other competes, however, the players do not have the same chance of winning a trial, and therefore T and S also depend on α. For α = 0.5 the reward is shared equally no matter what players do, so a player competing against a cooperator expects the same payoff as the cooperator:

$$T=S=\frac{3}{8}+\frac{\Delta}{4}-\frac{3\Delta^{2}}{8}$$

(7)

For α = 2, T diverges quickly from S as

$$T-S=\frac{3}{2}\left(\Delta+\Delta^{2}\right)$$

(8)

We also computed the expected payoffs by simulating 10,000 trials of two players competing and/or cooperating in the three conditions of the game, and the results matched the analytical solutions. For the intermediate and competitive conditions, for all values of Δ it also holds that \(T > R > P > S\), demonstrating that the Space Dilemma in these conditions is a continuous probabilistic form of Prisoner's Dilemma in the strong sense. For Δ < 0.4, in all conditions the payoff for a dyad that always cooperates is higher than for one in which one player always competes while the other always cooperates, or in which both alternate cooperation and competition (\(2R > T+S\)); therefore, for Δ < 0.4, the Space Dilemma is a probabilistic form of iterated Prisoner's Dilemma. Furthermore, in all conditions the maximum payoff for the dyad is reached for Δ = 0.25.
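
The Monte Carlo check described above can be sketched as follows. This assumes, as in the payoff derivation, that a competitor sits at the midpoint, a cooperator deviates from it by Δ, and that the trial reward is 1 minus the winner's distance to the target (an assumption consistent with the analytical R, T, S and P, but not stated explicitly in this excerpt):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_payoffs(alpha, delta, n_trials=100_000):
    """Monte Carlo estimate of R, T, S, P for the Space Dilemma."""
    targets = rng.uniform(0.0, 1.0, n_trials)

    def payoff(pos_a, pos_b):
        """Mean per-trial payoff of player A at pos_a against player B at pos_b."""
        d_a, d_b = np.abs(targets - pos_a), np.abs(targets - pos_b)
        trial_reward = 1.0 - np.minimum(d_a, d_b)          # assumed reward function
        a_wins, tie = d_a < d_b, d_a == d_b
        pay_a = np.where(a_wins, alpha * trial_reward, (1 - alpha) * trial_reward)
        pay_a = np.where(tie, 0.5 * trial_reward, pay_a)   # ties split the reward
        return pay_a.mean()

    mid = 0.5
    R = payoff(mid + delta, mid - delta)  # both cooperate, on opposite sides of the midpoint
    T = payoff(mid, mid + delta)          # compete against a cooperator (temptation)
    S = payoff(mid + delta, mid)          # cooperate against a competitor (sucker)
    P = payoff(mid, mid)                  # both compete (punishment)
    return R, T, S, P

# Compare against the analytical expressions, e.g. in the competitive context.
alpha, delta = 2.0, 0.25
R, T, S, P = expected_payoffs(alpha, delta)
R_an = 3/8 + delta/2 - delta**2
T_an = alpha*(3/8 + delta/2 - delta**2/8) + (1 - alpha)*(3/8 - 5*delta**2/8)
print(R, R_an, T, T_an, S, P)
```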

To model behaviour in the game we fitted eighteen different models belonging to three different classes, all assuming that players implement some form of tit-for-tat. The first class of models (S1-S4) is based on the assumption that players decide their behaviour simply based on the last observed behaviour of their counterpart, reciprocating either their last position, their last change in position, or a combination of the two. A second class of models goes further in assuming that a player learns to anticipate the co-player's position in a way that is predicted quantitatively by a Bayesian learner (Bayesian models B1-B8). The eight Bayesian models differ in how this expectation is mapped onto a choice, allowing for different degrees of influence of the context, the counterpart's behaviour and the player's own bias. A third class of models assumes that participants chose what to do based not only on the other player's behaviour but also on the outcome of each trial, with different assumptions about how winning a trial should change their behaviour in the next one (becoming more or less cooperative). This class of models effectively assumed that a player's behaviour would be shaped by the reward collected (Reward models in Fig. 3d).

For simplicity, we remapped positions in the space onto a cooperation space so that choosing the midpoint (the competitive position) corresponds to minimum cooperation, while going to the extreme ends of the space (either x = 0 or x = 1) corresponds to maximum cooperation. The cooperation level θ is therefore symmetrical about the midpoint and is defined as

$$\theta=\left|x-0.5\right|/0.5\qquad(\mathrm{S1{-}S4},\ \mathrm{B1{-}B8},\ \mathrm{R1{-}R6})$$

(9)

All models include a precision parameter capturing intrinsic response variability linked to the sensory-motor precision of the participant, such that, given each model's prediction about the player's decision, the actual choice is normally distributed around that prediction with a standard deviation equal to the inverse of the precision parameter, constrained to be in the range (0:10000).

For models S1-S4, we assumed that participants were simply reacting to their counterpart's most recent choice. Model S1 assumed that players would attempt to reciprocate their co-player's level of cooperation θ. As the model operates in a symmetrical cooperation space, this implies matching the co-player's expected level of cooperation in the opposite hemifield.

$$choice(t)\sim N\left(\theta(t-1);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{S1})$$

(10)

Model S2 assumed that players would attempt to reciprocate their co-player's update in their level of cooperation, moving from their own previous position, plus a fixed SocialBias parameter capturing their a priori desired level of cooperation, constrained to be in the range (−1000:1000).

$$choice(t)\sim N\left(\mathrm{SocialBias}+choice(t-1)+\Delta\theta(t-1);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{S2})$$

(11)

Model S3 was identical to model S2, with the only difference being three separate SocialBias parameters, one for each social context. Model S4 assumed that players would reciprocate their co-player's last level of cooperation scaled by a multiplicative TitXTat parameter, constrained to be in the range (0:2). If this parameter is greater than 1, the participant cooperates more than the counterpart.

$$choice(t)\sim N\left(\mathrm{SocialBias}+\mathrm{TitXTat}\cdot\theta(t-1);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{S4})$$

(12)
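
A rough Python sketch of how the reactive models' predictions (Eqs. 9-12) could be coded; S3 is S2 with one SocialBias per context, and all names and parameter values below are illustrative:

```python
import numpy as np

def remap_to_cooperation(x):
    """Eq. (9): distance from the midpoint, rescaled so 0 = compete, 1 = cooperate."""
    return abs(x - 0.5) / 0.5

def predict_S1(theta_prev):
    """S1: reciprocate the co-player's last level of cooperation."""
    return theta_prev

def predict_S2(choice_prev, dtheta_prev, social_bias):
    """S2: own last choice plus the co-player's last change, shifted by SocialBias."""
    return social_bias + choice_prev + dtheta_prev

def predict_S4(theta_prev, social_bias, titxtat):
    """S4: reciprocate the co-player's last cooperation, scaled by TitXTat."""
    return social_bias + titxtat * theta_prev

def sample_choice(prediction, precision, rng):
    """All models: the actual choice is Gaussian around the prediction, sd = 1/Precision."""
    return rng.normal(prediction, 1.0 / precision)

rng = np.random.default_rng(0)
theta_prev = remap_to_cooperation(0.8)   # co-player stopped at x = 0.8
print(sample_choice(predict_S4(theta_prev, 0.05, 1.1), precision=20.0, rng=rng))
```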

For models B1-B8, we used a Bayesian decision framework that has been shown to explain well how humans learn in social contexts32,99 to model how participants made decisions in the task and how the social context (reward distribution) can modulate these decisions. Our ideal Bayesian learner was assumed to update its expectation about the co-player's level of cooperation θ on a trial-by-trial basis by observing the position of its counterpart. In our Bayesian framework, knowledge about θ has two sources: a prior distribution P(θ) over θ, based initially on the social context and thereafter on past experience, and a likelihood function P(D|θ) based on the observed position of the counterpart in the last trial. The product of prior and likelihood gives, after normalisation, the posterior distribution that defines the expectation about the counterpart's position in the next trial:

$$P(\theta(t+1))=P(\theta(t+1)\mid D)=\frac{P(D\mid\theta(t))\,P(\theta(t))}{P(D)}\qquad(\mathrm{B1{-}B8})$$

(13)

According to Bayesian decision theory (Berger, 1985; O'Reilly et al., 2013), the posterior distribution P(θ|D) captures all the information that the participant has about θ. In the first trial of a block, when players have no evidence about the co-player's past positions, we chose normal priors corresponding to the social context: in the competition context μ_prior = 0, in the cooperation context μ_prior = 1, and in the intermediate context, where the winner takes all, μ_prior = 0.5; in all cases the standard deviation was fixed to σ_prior = 0.05, which heuristically speeds up the fit. The likelihood function is also assumed to be a normal distribution centred on the observed location of the co-player, with a standard deviation fixed to the average variability in positions observed so far in the block (that is, in all trials up to the one in which θ is estimated). Being the product of two Gaussian distributions, the posterior distribution is also Gaussian. All distributions are computed over the whole linear space at a resolution of 0.01.
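
A minimal sketch of the grid-based Bayesian update described above. The likelihood standard deviation is fixed here for illustration, whereas the text sets it to the running variability of observed positions:

```python
import numpy as np

GRID = np.arange(0.0, 1.0 + 1e-9, 0.01)   # discretised space, resolution 0.01

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def init_prior(context):
    """Context-dependent prior over the co-player's cooperation level."""
    mu = {"competition": 0.0, "intermediate": 0.5, "cooperation": 1.0}[context]
    prior = gaussian(GRID, mu, 0.05)
    return prior / prior.sum()

def bayes_update(prior, observed_theta, likelihood_sd):
    """Posterior over the co-player's next cooperation level after one observation."""
    likelihood = gaussian(GRID, observed_theta, likelihood_sd)
    posterior = prior * likelihood
    return posterior / posterior.sum()

prior = init_prior("intermediate")
posterior = bayes_update(prior, observed_theta=0.7, likelihood_sd=0.1)
coplayer_exp_pos = np.sum(GRID * posterior)   # Eq. (14): mean of the posterior
print(coplayer_exp_pos)
```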

While all Bayesian models assume that players update their expectations about the co-player's choices, they differ in how they translate these expectations into their own choices. We built eight Bayesian models of increasing complexity. All models include a Precision parameter. Model B1 simply assumes that players aim to reciprocate the expected position of the co-player (coplayer_exp_pos).

$$coplayer\_exp\_pos(t)=E\left(P(\theta(t))\right)\qquad(\mathrm{B1{-}B8})$$

(14)

$$choice(t)\sim N\left(coplayer\_exp\_pos(t);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B1})$$

(15)

Model B2 assumes that players aim for a level of cooperation shifted relative to coplayer_exp_pos. This shift is captured by the SocialBias parameter, which sets an a priori tendency to be more or less cooperative; all further Bayesian models include it.

$$choice(t)\sim N\left(coplayer\_exp\_pos(t)+\mathrm{SocialBias};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B2})$$

(16)

Model B3 further assumes that participants can fluctuate in how much they reciprocate their co-player's cooperation. This effect is modelled by multiplying coplayer_exp_pos by a TitXTat parameter.

$$choice(t)\sim N\left(\mathrm{TitXTat}\cdot coplayer\_exp\_pos(t)+\mathrm{SocialBias};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B3})$$

(17)

Model B4 further assumes that players keep track of the target position, updating their expectations after each trial with a Bayesian update similar to the one used for the co-player's position. Players then decide their level of cooperation based on the prediction of model B3 plus a linear term that depends on the expected position of the target, scaled by a TargetBias parameter. As the target was random, we did not expect this model to significantly improve the fit compared to model B3.

$$choice(t)\sim N\left(\mathrm{TitXTat}\cdot coplayer\_exp\_pos(t)+\mathrm{SocialBias}+\mathrm{TargetBias}\cdot E\left(P(x_{target})\right);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B4})$$

(18)

Model B5 further assumes that participants modulate how much they are willing to reciprocate their co-player's behaviour based on the social risk associated with the context. In this model the tit-for-tat term takes the form of a multiplicative TitXTat factor

$$\mathrm{TitXTat\ factor}=\frac{1}{1+q\_risk\cdot social\_risk}\qquad(\mathrm{B5})$$

(19)

$$choice(t)\sim N\left(\mathrm{TitXTat\ factor}\cdot coplayer\_exp\_pos(t)+\mathrm{SocialBias}+\mathrm{TargetBias}\cdot E\left(P(x_{target})\right);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B5})$$

(20)

where q_risk is a parameter capturing the sensitivity to the social risk induced by the context, which is proportional to the redistribution parameter α:

$$social\_risk=2\alpha-1\qquad(\mathrm{B5{-}B8})$$

(21)

Models B6, B7 and B8 do not include the target term. They all model the TitXTat factor with two parameters, as in

$$\mathrm{TitXTat\ factor}=\frac{\mathrm{TitXTat}}{1+q\_risk\cdot social\_risk}\qquad(\mathrm{B6{-}B8})$$

(22)

$$choice(t)\sim N\left(\mathrm{TitXTat\ factor}\cdot coplayer\_exp\_pos(t);\ 1/\mathrm{Precision}\right)\qquad(\mathrm{B6{-}B8})$$

(23)

Models B7 and B8 further assume that participants estimate the probability that their co-player will betray their expectations and behave more competitively than expected. This is computed by updating betrayal expectations after each trial in a Bayesian fashion, using the difference between the observed and expected position of the co-player to update a distribution over all possible discrepancies. This produces, for each trial, an expected level of change in the co-player's position. Models B7 and B8 both weigh this expected betrayal with a betrayal-sensitivity parameter and add the betrayal term either to the social risk, increasing it by an amount proportional to the expected betrayal (model B7), or to the choice prediction, shifting it towards competition by an amount proportional to the expected betrayal (model B8). Model B6 does not include any modelling of betrayal.

For models R1-R6, we assumed that participants were simply adjusting their position based on the feedback received in the previous trial. Model R1 assumed that after losing, players would become more competitive and after winning, more cooperative. These updates in different directions would be captured by two parameters Shiftwin and Shiftlose both constrained to be in the range (0:10).

$$choice(t)\sim N\left(choice(t-1)\pm Shift_{(win,\,lose)};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{R1})$$

(24)

Model R2 assumed that after losing, players would shift their position in the opposite direction than they did in the previous trial, while after winning, they would keep shifting in the same direction. These updates in different directions would be captured by two parameters Shiftwin and Shiftlose both constrained to be in the range (0:10).

$$choice(t)\sim N\left(choice(t-1)\pm Shift_{(win,\,lose,\,\mathrm{sign}(\Delta choice(t-1)))};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{R2})$$

(25)
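
An illustrative sketch of the R1 and R2 update rules (Eqs. 24-25); the sign conventions (wins push towards cooperation in R1, losses reverse the previous shift in R2) follow the verbal description above and are otherwise assumptions:

```python
import numpy as np

def predict_R1(choice_prev, won, shift_win, shift_lose):
    """R1: move towards cooperation after a win, towards competition after a loss."""
    return choice_prev + (shift_win if won else -shift_lose)

def predict_R2(choice_prev, dchoice_prev, won, shift_win, shift_lose):
    """R2: keep shifting in the same direction after a win, reverse it after a loss."""
    direction = np.sign(dchoice_prev) if dchoice_prev != 0 else 1.0
    shift = shift_win * direction if won else -shift_lose * direction
    return choice_prev + shift

print(predict_R1(0.4, won=False, shift_win=0.1, shift_lose=0.2))
print(predict_R2(0.4, dchoice_prev=-0.05, won=True, shift_win=0.1, shift_lose=0.2))
```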

Models R3 and R4 are similar to models R1 and R2 in how they update the position after winning or losing, but players also take into account their co-player's last level of cooperation, scaled by a multiplicative TitXTat parameter, and their own a priori tendency to be more or less cooperative, captured by a SocialBias parameter.

$$choice(t)\sim N\left(\mathrm{SocialBias}+\mathrm{TitXTat}\cdot\theta(t-1)\pm Shift_{(win,\,lose)};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{R3})$$

(26)

$$choice(t)\sim N\left(\mathrm{SocialBias}+\mathrm{TitXTat}\cdot\theta(t-1)\pm Shift_{(win,\,lose,\,\mathrm{sign}(\Delta choice(t-1)))};\ 1/\mathrm{Precision}\right)\qquad(\mathrm{R4})$$

(27)

Models R5 and R6 are identical to models R1 and R2, with the only difference that each choice is fitted using the actual value of the previous choice made by the player rather than its fitted value (to prevent underfitting due to recursive errors).

We fit all models to individual participants' data from all three social contexts using custom scripts in MATLAB and the MATLAB function fmincon. The log-likelihood was computed for each model as

$$LL(model)=\sum_{subjects}\sum_{t}LL(choice(t))$$

(28)

where

$$LL(choice(t))=\log\left(\sqrt{\frac{Precision}{2\pi}}\,\exp\left(-0.5\left(\left(choice(t)-prediction(t)\right)\cdot Precision\right)^{2}\right)\right)$$

(29)

We compared models by computing the Bayesian Information Criterion

$$BIC(model)=k\log\left(n\right)-2\,LL(model)$$

(30)

where k is the number of parameters of each model and n is the number of trials multiplied by the number of participants.
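
A small sketch of the model-comparison quantities, following Eqs. (29) and (30) as written (a Gaussian log-likelihood around the model prediction with standard deviation 1/Precision, then the BIC); the numbers are toy values:

```python
import numpy as np

def loglik_choices(choices, predictions, precision):
    """Eq. (29) summed over trials: Gaussian log-likelihood of each choice
    around the model prediction, with sd given by 1/Precision."""
    resid = (np.asarray(choices) - np.asarray(predictions)) * precision
    return np.sum(np.log(np.sqrt(precision / (2 * np.pi))) - 0.5 * resid ** 2)

def bic(loglik, n_params, n_obs):
    """Eq. (30): Bayesian Information Criterion."""
    return n_params * np.log(n_obs) - 2.0 * loglik

choices = [0.2, 0.5, 0.7]
predictions = [0.25, 0.45, 0.65]
ll = loglik_choices(choices, predictions, precision=20.0)
print(ll, bic(ll, n_params=4, n_obs=len(choices)))
```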

All Bayesian models significantly outperformed both the simple reactive models and the reward-based ones. To validate this modelling approach and confirm that players were trying to predict the other's position rather than just reciprocating preceding choices, we ran a regression model explaining participants' choices based on both the last position of the co-player and its Bayesian expectation for the following trial (see Supplementary Fig. 6b).

The winning model is B6, a Bayesian model that accounted for people's biases towards cooperativeness, for the influence of the other player's behaviour on subsequent choices, and for the influence of the social context. In this model, participants choose where to position themselves in each trial based on (21), (22) and (23).

Precision, SocialBias, TitXTat and q_risk are the four free parameters of the model. Note that TitXTat is a parameter capturing the context-independent amount of tit-for-tat, which is then normalised by the context-dependent social risk.
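
A sketch of the winning model's choice prediction under Eqs. (21)-(23). Note that Eq. (23) as printed does not show SocialBias even though it is listed among the free parameters; the sketch adds it as an additive offset, as in models B2-B5, which is an assumption:

```python
def predict_B6(coplayer_exp_pos, alpha, titxtat, q_risk, social_bias):
    """Mean of the choice distribution under model B6 (Eqs. 21-23 sketch)."""
    social_risk = 2.0 * alpha - 1.0                           # Eq. (21)
    titxtat_factor = titxtat / (1.0 + q_risk * social_risk)   # Eq. (22)
    # SocialBias added here as an assumption; Eq. (23) as printed omits it.
    return social_bias + titxtat_factor * coplayer_exp_pos    # Eq. (23)

# Example: a higher alpha (more competitive context) damps reciprocation more.
for alpha in (0.5, 1.0, 2.0):
    print(alpha, predict_B6(0.6, alpha, titxtat=1.0, q_risk=0.8, social_bias=0.0))
```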

We assessed the degree to which we could reliably estimate model parameters given our fitting procedure. Specifically, we generated one simulated behavioural data set (i.e., choices for an interacting couple over 60 trials in each of the three social contexts) using the average parameters originally estimated on the real behavioural data. Additionally, we generated five more simulated behavioural data sets using five parameter sets randomly sampled from the ranges used in the original fit. For each simulated behavioural data set we then fitted the winning model B6 to the generated data, identifying the set of model parameters that maximised the log-likelihood in the same way as for the original behavioural data. To assess the recoverability of our parameters we repeated this procedure 10 times for each simulated data set (i.e., 60 repetitions). The recoverability of the parameters was high in almost all cases, as can be seen in Supplementary Fig. 6c.
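
The parameter-recovery procedure could look roughly like the following self-contained sketch (the synthetic parameter values, the uniform stand-in for the Bayesian expectations, and scipy's minimize in place of MATLAB's fmincon are all assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def b6_prediction(expectation, alpha, titxtat, q_risk, social_bias):
    """Mean choice under model B6 (same sketch as above); works element-wise on arrays."""
    titxtat_factor = titxtat / (1.0 + q_risk * (2.0 * alpha - 1.0))
    return social_bias + titxtat_factor * expectation

def neg_loglik(params, choices, expectations, alphas):
    """Negative Gaussian log-likelihood (Eq. 29) of the choices under model B6."""
    precision, social_bias, titxtat, q_risk = params
    preds = b6_prediction(expectations, alphas, titxtat, q_risk, social_bias)
    resid = (choices - preds) * precision
    return -np.sum(np.log(np.sqrt(precision / (2 * np.pi))) - 0.5 * resid ** 2)

# 60 trials in each of the three contexts, generated with known parameters.
true_params = np.array([20.0, 0.05, 1.0, 0.8])   # Precision, SocialBias, TitXTat, q_risk
alphas = np.repeat([0.5, 2.0, 1.0], 60)
expectations = rng.uniform(0.0, 1.0, alphas.size)
preds = b6_prediction(expectations, alphas, true_params[2], true_params[3], true_params[1])
choices = rng.normal(preds, 1.0 / true_params[0])

fit = minimize(neg_loglik, x0=np.array([10.0, 0.0, 0.5, 0.5]),
               args=(choices, expectations, alphas), method="L-BFGS-B",
               bounds=[(1e-3, 1e4), (-1e3, 1e3), (0.0, 2.0), (0.0, 10.0)])
print("true:", true_params, "recovered:", np.round(fit.x, 3))
```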

The Bayesian framework allowed us to derive how the counterpart's position influenced participants' initial impressions of the level of cooperation needed in a given context. Given this framework, we measured how much the posterior distribution over the co-player's position differs from the prior distribution. We did so by computing, for each trial, the Kullback-Leibler divergence (KLD) between the posterior and prior probability distributions over the co-player's response. This difference formally represents the degree to which P2 violated P1's expectation and is a trial-by-trial measure of a social prediction error that triggers a change in P1's belief, guiding future decisions. A greater KL divergence indicates a larger cooperation-competition update. We therefore estimated a social prediction error signal by computing the surprise each player experienced when observing the co-player's position, given their current expectation. In the following equation, where p and q represent respectively the prior and posterior density functions over the co-player's position, the KL divergence is given by:

$$KLD\left(p,\,q\right)=-\int p\left(x\right)\log q\left(x\right)dx+\int p\left(x\right)\log p\left(x\right)dx=\int p\left(x\right)\left(\log p\left(x\right)-\log q\left(x\right)\right)dx$$
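
On the discretised space used by the Bayesian learner, the KL divergence reduces to a sum over grid points; a minimal sketch, with illustrative Gaussian prior and posterior:

```python
import numpy as np

def kld(p, q):
    """Discrete approximation of KL(p || q), with p and q probability masses over a grid."""
    p = p / p.sum()
    q = q / q.sum()
    mask = p > 0                       # bins with zero prior mass contribute nothing
    return np.sum(p[mask] * (np.log(p[mask]) - np.log(q[mask])))

grid = np.arange(0.0, 1.0 + 1e-9, 0.01)
prior = np.exp(-0.5 * ((grid - 0.5) / 0.05) ** 2); prior /= prior.sum()
posterior = np.exp(-0.5 * ((grid - 0.6) / 0.05) ** 2); posterior /= posterior.sum()
print(kld(prior, posterior))
```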

Excerpt from:

Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competition - Nature.com

Categories
Co-location

Nov. 14 Building Permits | Business | reflector.com – Daily Reflector

See the rest here:

Nov. 14 Building Permits | Business | reflector.com - Daily Reflector

Categories
Co-location

Ann Arbor brewery hopes new location will bring creative flair to area – MLive.com

WASHTENAW COUNTY, MI -- An Ann Arbor brewing company is set to pass another hurdle this week as it continues to pursue opening a new campus.

Mothfire Brewing Co., currently at 2290 S. Industrial Highway, has plans to move to a new location at 713 W. Ellsworth Road in Pittsfield Township.

The Pittsfield Township Board of Trustees is scheduled to hear a resolution recommending the approval of a permit for the company's on-site tasting room on Wednesday, Nov. 9. Local approval is required for the state-level permit for an on-premises tasting room. The preliminary site plan was approved in June 2022.

Owner Noah Kaplan said he hopes the new location will bring a creative flair to the area and help turn it into a "maker's corridor." He also owns the neighboring Leon Speakers.

"We have a big vision for the campus," Kaplan said.

The 6,000-square-foot location will have a patio and tasting room, as well as a customer-facing, "really beautiful" brewing system, Kaplan said.

The brewery is currently slated to open in March 2023.

"But we are ready to be pouring," Kaplan said.

Formerly known as Pileated Brewing Co., Mothfire debuted in its current space in 2020 in what Kaplan called a "thousand-day experiment." The Mothfire owners bought the business and brewhouse from Pileated Brewing Co. in 2019.

"We created over 40 brands of beers there. We literally brewed hundreds of times," Kaplan said. "We really worked out that location to be a test kitchen and kind of a trial so that we can really hone in on the quality of beer, figure out what people loved around here."

The new location will serve both beers and non-alcoholic cocktails in a space featuring 16 taplines, a dramatic façade and a central fireplace, Kaplan said. Rather than continuing the moodier feel of its current tasting room, the new Mothfire location will be bright and high-contrast.

The Ellsworth Road location, which has parking for 80 people, will also be able to serve more people and be a step forward for the company, Kaplan said.

"The vision we always have is about evolution from a moth standpoint. We definitely think about the stages of the moth of where we are," he said. "That stage is kind of a cocoon, and we're about to get a flight and take wings."

Mothfire Brewing Co. plans to open its new location at 713 W. Ellsworth Road in March 2023. Although hours are not yet finalized, the brewery currently plans to be open from 5 to 10 p.m. Tuesday through Thursday, 4 to 11 p.m. on Friday, noon to 10 p.m. on Saturday and noon to 6 p.m. on Sunday.

Find the brewing company online, on social media or by phone at 734-369-6290.

Read more from The Ann Arbor News:

New Ann Arbor-area brewery clears first step toward opening

Election results for the Nov. 8 general election in Ann Arbor, Washtenaw County

Ann Arbor voters approve climate-action tax proposal with 71% support

Visit link:

Ann Arbor brewery hopes new location will bring creative flair to area - MLive.com