Categories
Co-location

First combined police and fire stations for Gloucestershire announced – the two locations – Gloucestershire Live

Police officers will be based at two fire stations in Gloucestershire in a bid to improve working between emergency services. It means Newent and Winchcombe fire stations will see police officers starting and finishing their shifts at the new hubs.

Policing teams would go out on patrol from the bases, whether on foot or in vehicles, returning for meal breaks. The idea is to tackle issues such as anti-social behaviour and road safety in the local communities.

Teams could also use the buildings for pre-arranged appointments or meetings with the public and other agencies. The stations could also serve as bases for local Special Constables and Volunteer Police Community Support Officers, which the Police and Crime Commissioner and Constabulary are working to expand.


Deputy Police and Crime Commissioner (DPCC) Nick Evans made the announcement at Gloucestershire County Council's Fire and Rescue Scrutiny Committee on Friday. He said: "This is a huge step forward in working together with our colleagues in the County Council and GFRS to make our county safer."

"Our Police and Crime Commissioner Chris Nelson promised to expand the visibility and presence of police in more of our communities, particularly rural areas, by making better use of publicly owned buildings and collaborating with the fire service. This shows we mean business and are delivering on that promise."

"Work has already begun to look at other areas where it makes operational sense for similar collaborations to take place, adding to the existing footprint of police buildings. With regular and volunteer officers working from these new stations, there will be a very visible operational impact that will be felt by our communities."

Cllr Dave Norman, cabinet member with responsibility for the Fire and Rescue Service, said: "I fully support collaboration between our fire and rescue service and police partners. I am pleased to see that the feasibility has been scrutinised appropriately and that Newent and Winchcombe Community Fire Stations are able to provide a base for the police officers to operate out of collaboratively whilst continuing to provide valuable community services."

"The co-location will be at no additional cost to Gloucestershire County Council. It is important we maintain high quality, accessible services whilst ensuring value for money for the residents of Newent and Winchcombe."

Mark Preece, chief fire officer at Gloucestershire Fire and Rescue Service, said: "We have a very close working relationship with Gloucestershire Police and are extremely pleased that they will move into our Community Fire Stations in Newent and Winchcombe with us. We looked at six of our stations across the county, and both Newent and Winchcombe proved feasible for co-location. We currently have the Ambulance Service based at some of our community fire stations and this is a further step in our commitment to blue light collaboration."

"We will always seek out opportunities to collaborate with other Blue Light Services to provide the best possible service to our communities."


Categories
Co-location

Influenza A virus reassortment in mammals gives rise to genetically distinct within-host subpopulations – Nature.com

Ethical considerations

All the animal experiments were conducted in accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The studies were conducted under animal biosafety level 2 containment and approved by the IACUC of Emory University (DAR-2002738-ELMNTS-A) for guinea pig (Cavia porcellus), the IACUC of the University of Georgia (AUP A2015 06-026-Y3-A5) for ferret (Mustela putorius furo) and the IACUC of Kansas State University (protocol #4120) for swine (Sus scrofa). The animals were humanely euthanized following guidelines approved by the American Veterinary Medical Association.

Madin–Darby canine kidney (MDCK) cells, a gift from Dr. Robert Webster (St Jude Children's Research Hospital, Memphis, TN) to D.R.P., were used for all experiments. A seed stock of MDCK cells at passage 23 was subsequently amplified and maintained in Minimal Essential Medium (Gibco) supplemented with 10% fetal bovine serum (FBS; Atlanta Biologicals) and Normocin (Invivogen). 293T cells (ATCC, CRL-3216) were maintained in Dulbecco's Minimal Essential Medium (Gibco) supplemented with 10% FBS and PS. All cells were cultured at 37 °C and 5% CO2 in a humidified incubator. The cell lines were not authenticated. All cell lines were tested monthly for mycoplasma contamination while in use. The medium for the culture of IAV in MDCK cells (virus medium) was prepared by supplementing the basal medium for the relevant cell type with 4.3% BSA and Normocin.

Viruses used in this study were derived from influenza A/Netherlands/602/2009 (H1N1) virus (NL09) and were generated by reverse genetics (refs. 51, 52, 53). In brief, 293T cells transfected with reverse genetics plasmids 16–24 h previously were co-cultured with MDCK cells at 37 °C for 40–48 h. Recovered virus was propagated in MDCK cells at a low multiplicity of infection to generate working stocks. Titration of stocks and experimental samples was carried out by plaque assay in MDCK cells. Silent mutations were introduced into each segment of the VAR virus by site-directed mutagenesis of reverse genetics plasmids. The specific changes introduced into the VAR virus were reported previously (refs. 24, 31). NL09 VAR virus was engineered to contain a 6XHIS epitope tag plus a GGGS linker at the amino (N) terminus of the HA protein, following the signal peptide. NL09 WT virus carries an HA epitope tag plus a GGGS linker inserted at the N terminus of the HA protein (ref. 31). For animal challenges, a 1:1 mixture of NL09 WT and VAR viruses was prepared using methods described previously (ref. 24). This mixture was validated in cell culture by quantifying cells positive for HIS and HA tags following infection of MDCK cells, revealing an empirically determined ratio of 0.95:1 (WT:VAR). The same mixture was used for all experiments reported herein.

Replication of NL09, NL09 WT, and NL09 VAR viruses was determined in triplicate culture wells. MDCK cells in 6-well dishes were inoculated at an MOI of 0.05 PFU/cell in PBS. After 1 h incubation at 37 °C, the inoculum was removed, cells were washed 3x with PBS, 2 mL virus medium was added to the cells, and dishes were returned to 37 °C. A 120 µl volume of culture medium was sampled at the indicated time points and stored at −80 °C. Viral titers were determined by plaque assay on MDCK cells.

Female, Hartley strain guinea pigs weighing 250–350 g were obtained from Charles River Laboratories and housed by the Emory University Department of Animal Resources. Before intranasal inoculation and nasal washing, the guinea pigs were anaesthetized with 30 mg kg−1 ketamine and 4 mg kg−1 xylazine by intramuscular injection. The GPID50 of the NL09 virus was previously determined to be 1×10^1 PFU (ref. 32). To evaluate reassortment kinetics in guinea pigs, groups of six animals were infected with 1×10^3 PFU (1×10^2 ID50) or 1×10^6 PFU (1×10^5 ID50) of the NL09 WT/VAR virus mixture. Virus inoculum was given intranasally in a 300 µl volume of PBS. Nasal washes were performed on days 1–6 post-inoculation and titered by plaque assay. Viral genotyping was performed on samples collected on days 1, 2, and 3 or 4 for each guinea pig. Day 3 was used for animals receiving the higher dose, since the virus is cleared rapidly in this system and shedding has ceased by day 4.

Female ferrets, 20 weeks old, from Triple F Farms (Gillett, PA) were used. All ferrets were seronegative by anti-nucleoprotein (anti-NP) influenza virus enzyme-linked immunosorbent assay (Swine Influenza Virus Ab Test, IDEXX, Westbrook, ME) prior to infection. Five days prior to experimentation, ferrets were sedated and a subcutaneous transponder (Bio Medic Data Systems, Seaford, Delaware) was implanted to identify each animal and provide temperature readings. Anesthetics were applied via intramuscular injection of ketamine (20 mg kg−1) and xylazine (1 mg kg−1). Infections were performed via intranasal inoculation of 1 mL of virus diluted in PBS. Ferret nasal washes were carried out as follows: ferrets were anesthetized and 1 ml of PBS administered to the nose was used to induce sneezing. Expelled fluid was collected into Petri dishes and samples were collected in an additional 1 mL of PBS. Infected ferrets were monitored daily for clinical signs, temperature, and weight loss. Ferrets were euthanized by intravenous injection of 1 ml of Beuthanasia-D diluted 1:1 with DI water (Merck, Madison, NJ).

For determination of the ferret ID50, six groups of four ferrets each were inoculated with increasing doses of the NL09 WT/VAR virus mixture (1×10^0.1 PFU, 1×10^0 PFU, 1×10^1 PFU, 1×10^2 PFU, 1×10^3 PFU, and 1×10^4 PFU). Nasal washes were collected daily for up to 6 days and titrated for viral shedding by plaque assay. The ferret ID50 was determined based on results obtained on day 2 and found to be equivalent to 3.2×10^2 PFU.

For analysis of reassortment frequency and detection of viral antigen in tissues, ferrets were inoculated with 3.2×10^4 PFU (1×10^2 ID50) or 3.2×10^7 PFU (1×10^5 ID50). After infection, nasal washes were collected daily for up to 6 days and titrated by plaque assay. Viral genotyping was performed on samples collected on days 1, 3, and 5 for each ferret. Necropsies were performed on days 1–4 for the collection of nasal turbinate and lung tissues. A single lung lobe (the left caudal lobe) was sampled from each ferret. Tissue sections collected for virology were disrupted in 1 mL of sterile PBS using the TissueLyser LT (Qiagen, Germantown, MD) at 30 Hz for 5 min, twice, in microcentrifuge tubes with 3 mm Tungsten Carbide Beads (Qiagen, St. Louis, MO). Supernatants were clarified by centrifugation and frozen at −80 °C until viral titration. For histology, tissues were submerged in 10% buffered formalin (Sigma Aldrich, St. Louis, MO) and stored at room temperature until evaluation.

The pig study was conducted at the Large Animal Research Center (a biosafety level 2+ facility) at Kansas State University in accordance with the Guide for the Care and Use of Agricultural Animals in Research and Teaching of the U.S. Department of Agriculture. To determine virus reassortment and viral antigen in tissues, 18 four-week-old, influenza H1 and H3 subtype virus- and porcine reproductive and respiratory syndrome virus-seronegative, gender-mixed crossbred pigs were randomly allocated into groups. Each pig was inoculated with 2×10^6 PFU of the NL09 WT/VAR mixture through both intranasal and intratracheal routes (1×10^6 PFU was administered in a 1 ml volume by each of these two routes) under anesthesia, as described previously (ref. 54). Clinical signs for all experimental pigs were monitored daily throughout the experiment. Nasal swabs were collected at 1, 3, 5, and 7 days post infection from each pig. Three infected pigs were euthanized at each of 3, 5, and 7 days post infection. During necropsy, nasal turbinate, trachea, and lung tissues from seven lobes collected from each pig were frozen at −80 °C for virus isolation and fixed in 10% buffered formalin for IHC examination.

Reassortment frequencies were evaluated by genotyping 21 clonal viral isolates per sample, as described previously (ref. 50). This analysis was applied to guinea pig nasal washes, ferret nasal washes, swine nasal swabs, ferret tissue homogenates, and swine tissue homogenates. Time points to be examined were chosen based on positivity in all animals in a treatment group. Thus, nasal wash samples from days 1, 2, and 3 or 4 were evaluated from guinea pigs, while samples from days 1, 3, and 5 were evaluated for swine and ferrets. Ferret tissues collected on days 1, 2, 3, and 4 and swine tissues collected on days 3 and 5 were analyzed.

Briefly, plaque assays were performed on MDCK cells in 10 cm dishes to isolate virus clones. Serological pipettes (1 ml) were used to collect agar plugs into 160 µl PBS. Using a ZR-96 viral RNA kit (Zymo), RNA was extracted from the agar plugs and eluted in 40 µl nuclease-free water (Invitrogen). Reverse transcription was performed using Maxima reverse transcriptase (RT; ThermoFisher) according to the manufacturer's protocol. The resulting cDNA was diluted 1:4 in nuclease-free water and each cDNA was combined with segment-specific primers (Supplementary Data 3; refs. 24, 31) designed to amplify a region of approximately 100 base pairs. The amplicon for each segment contains the site of the single nucleotide change in the VAR virus. Quantitative PCR was performed with Precision Melt Supermix (Bio-Rad) using a CFX384 Touch Real-Time PCR Detection System (Bio-Rad). Quantitative PCR data were collected using CFX Manager Software v2.1 (Bio-Rad). Template amplification was followed by high-resolution melt analysis to differentiate the WT and VAR amplicons (ref. 55). Precision Melt Analysis software v1.2 (Bio-Rad) was used to determine the parental origin of each gene segment based on the melting properties of the cDNA amplicons relative to WT and VAR controls. Each plaque was assigned a genotype based on the combination of WT and VAR genome segments, with two variants on each of eight segments allowing for 256 potential genotypes.
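
This final assignment step is essentially a bit-encoding problem; a minimal Python sketch (segment order, function and variable names are illustrative, not taken from the paper) of mapping eight WT/VAR calls to one of the 256 genotypes could look like this:

```python
# Minimal sketch: encode per-segment parental calls into a genotype label.
# Assumes each plaque isolate yields a WT/VAR call for all eight IAV segments;
# the segment order and helper names are illustrative only.

SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def genotype_id(calls: dict[str, str]) -> int:
    """Map eight 'WT'/'VAR' calls to an integer in 0..255 (2^8 genotypes)."""
    gid = 0
    for i, seg in enumerate(SEGMENTS):
        if calls[seg] == "VAR":
            gid |= 1 << i  # set bit i when the segment derives from the VAR parent
    return gid

# Example: a reassortant carrying VAR-derived HA and NA on an otherwise WT background.
plaque = {s: "WT" for s in SEGMENTS}
plaque["HA"] = plaque["NA"] = "VAR"
print(genotype_id(plaque))  # 0 = all-WT parent, 255 = all-VAR parent
```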

Tissue samples from the nasal turbinates of ferrets, the right caudal lung lobe of ferrets, and all seven lung lobes of swine were fixed in 10% neutral buffered formalin for at least 24 h before being embedded in paraffin. Nasal turbinates were decalcified prior to embedding. Sections from all tissues were cut and slides were prepared. The tissues were deparaffinized by warming the slides at 60 °C on a slide warmer for 45 min, followed by immersion in xylenes (Sigma) for 25 min. The slides were then immersed in 100% ethanol for 10 min, 95% ethanol for 10 min, and 70% ethanol for 5 min. The slides were then washed by placing them in deionized water for 1 h. Antigen retrieval was performed by steaming the slides in 10 mM citric acid, pH 6.0, for 45 min, followed by washing in tap water and 1× PBS (Corning) for 5 min. The WT and VAR viruses were detected in the tissues using a mouse anti-HA Alexa Fluor 488 antibody (Invitrogen catalog number A-21287; clone 16B12; 1:50 dilution) and a mouse anti-His Alexa Fluor 555 antibody (Invitrogen catalog number MA1-135-A555; clone 4E3D10H2/E3; 1:50 dilution), while epithelial cell borders were stained using a rabbit anti-Na+/K+ ATPase Alexa Fluor 647 antibody (Abcam catalog number 198367; clone EP1845Y; 1:100 dilution) at 4 °C overnight. Slides were washed three times in 1× PBS (Corning) and once in deionized water to remove excess antibody. The slides were mounted onto glass coverslips using ProLong Diamond Anti Fade mounting media (ThermoFisher). Images were acquired using an Olympus FV1000 confocal microscope at 60× magnification under an oil immersion objective. The specificity of the antibodies was confirmed by infecting MDCK cells with the NL09 WT virus, the NL09 VAR virus, or both viruses for 24 h. The cells were fixed using 4% paraformaldehyde (Alfa Aesar) and stained for the HA and His tags using the antibodies described above (Supplementary Fig. 9).

For morphological analysis via IHC, the slides were pre-treated in pH 9.0 buffer at 110 °C for 15 min. Blocking was performed using hydrogen peroxide for 20 min followed by PowerBlock (BioGenex) for 5 min. Slides were washed with PBS thrice and NP antigen was detected using a goat anti-influenza NP polyclonal antibody (Abcam catalog number ab155877; 1:1000 dilution) for 1 h. Slides were washed thrice with PBS to remove excess antibody and incubated with a rabbit anti-goat biotinylated IgG (Vector Laboratories catalog number BA-5000; 1:5000 dilution) for 10 min. After washing, 4Plus Alkaline Phosphatase Label (BioCare Medical) was added for 10 min. The antigen signal was detected by incubating the slides in Chromogen IP Warp Red stain (BioCare Medical) for 10 min. Haematoxylin counterstaining was performed after antigen staining.

Figures were generated using Python 3 v3.10 (ref. 56) and the packages matplotlib v3.6.0 (ref. 57), NumPy v1.23.3 (ref. 58), pandas v1.5.0 (ref. 59), and seaborn v0.12.0 (ref. 60). Simulations were conducted in Python 3 v3.10.

Here a viral genotype is defined as a unique combination of the eight IAV segments, where each segment is derived from either the variant or wild-type parental virus; therefore, there are 2^8 possible unique genotypes, with two parental genotypes and 254 reassortant genotypes. For any given sample, the frequency of each unique genotype can be calculated by dividing the number of appearances of that genotype in the sample by the total number of clonal isolates obtained for that sample.

Understanding the distribution of unique genotypes involves using both unweighted and weighted genotype frequency statistics. Genotype richness ($S$) does not incorporate genotype frequency and is given by the number of unique genotypes in a sample. Given our sample size of 21 plaque isolates, genotype richness (the number of distinct genotypes detected in a sample) can range from a minimum of 1 (a single genotype detected 21 times) to a maximum of 21 (21 unique genotypes detected).

Diversity was measured using the Shannon–Wiener index ($H$), which considers both richness and evenness in the frequency with which genotypes are detected. In our dataset, diversity can range from 0 to 3.04. Shannon–Wiener diversity was calculated as:

$$H=-\sum_{i=1}^{S} p_i \ln p_i$$

(1)

where $S$ is genotype richness and $p_i$ is the frequency of unique genotype $i$ in the sample (ref. 6).
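
As a concrete illustration, richness and the Shannon–Wiener index for a single sample of 21 clonal isolates could be computed with a short Python sketch like the one below (the genotype IDs and helper name are hypothetical, not study data):

```python
import math
from collections import Counter

def richness_and_shannon(genotypes: list[int]) -> tuple[int, float]:
    """Genotype richness S and Shannon-Wiener diversity H for one sample."""
    counts = Counter(genotypes)
    n = len(genotypes)
    freqs = [c / n for c in counts.values()]       # p_i for each unique genotype
    S = len(counts)                                # richness
    H = -sum(p * math.log(p) for p in freqs)       # H = -sum(p_i ln p_i)
    return S, H

# 21 clonal isolates: e.g. 10 WT parents (0), 5 VAR parents (255), 6 reassortants.
sample = [0] * 10 + [255] * 5 + [40, 40, 7, 7, 129, 200]
print(richness_and_shannon(sample))  # maximum possible H for 21 isolates is ln(21) = 3.04
```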

To address whether evaluating 21 plaques per sample was sufficient to yield robust results on genotype diversity, we used a computational simulation to test the sensitivity of the measured diversity values to the number of plaques sampled. In these simulations, we calculated the diversity present in samples generated by randomly picking n (out of the possible 21) plaques without replacement. At each sampling effort n, we simulated 1000 samples, with plaques replaced between samples. The results typically show that diversity values increase as n increases and asymptote as n approaches 21, suggesting that further increases in n would not greatly change the results and validating the use of 21 plaques (Supplementary Fig. 10).
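
A minimal sketch of this kind of subsampling analysis, using hypothetical genotype data and a compact Shannon–Wiener helper (the exact procedure in the paper may differ in detail), is:

```python
import math
import random
from collections import Counter

def shannon(genotypes: list[int]) -> float:
    """Shannon-Wiener diversity H for a list of genotype calls."""
    counts = Counter(genotypes)
    n = len(genotypes)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def diversity_vs_sampling_effort(genotypes: list[int], reps: int = 1000) -> dict[int, float]:
    """Mean diversity when only n of the available plaques are examined."""
    means = {}
    for n in range(1, len(genotypes) + 1):
        draws = [shannon(random.sample(genotypes, n)) for _ in range(reps)]  # n plaques, no replacement
        means[n] = sum(draws) / reps
    return means

# Hypothetical sample of 21 clonal isolates (genotype IDs).
sample = [0] * 10 + [255] * 5 + [40, 40, 7, 7, 129, 200]
curve = diversity_vs_sampling_effort(sample)
print(round(curve[5], 2), round(curve[21], 2))  # diversity rises with n and flattens near n = 21
```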

To evaluate the extent to which the spatial dynamics of viral reassortment and propagation shape the overall richness and diversity in a host, we sought to compare the observed richness and diversity at each anatomical site to that which would be expected if virus moved freely among anatomical locations. Thus, to simulate free mixing within the host, we randomly shuffled observed viral genotypes among all sites in a given animal. The average richness and Shannon–Wiener index of the simulated viral populations at each site were then calculated. The 5th and 95th percentiles of the simulated distribution for each animal were calculated and compared to the observed richness and diversity for each of the anatomical sites. If a site's observed richness and diversity fell below the 5th percentile or above the 95th percentile, a barrier to the influx or efflux of reassortant genotypes from or to the other sites is suggested.
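
The free-mixing null model amounts to permuting genotypes across the sites of one animal and re-computing richness; the sketch below illustrates the idea with made-up site names and genotype IDs:

```python
import random

def free_mixing_percentiles(sites: dict[str, list[int]], reps: int = 1000):
    """Shuffle genotypes among all sites of one animal (free-mixing null model)
    and return approximate 5th/95th percentiles of simulated richness per site."""
    pooled = [g for lst in sites.values() for g in lst]
    sizes = {name: len(lst) for name, lst in sites.items()}
    sims = {name: [] for name in sites}
    for _ in range(reps):
        random.shuffle(pooled)
        start = 0
        for name, size in sizes.items():          # re-deal the shuffled pool to the sites
            chunk = pooled[start:start + size]
            start += size
            sims[name].append(len(set(chunk)))     # simulated genotype richness
    bounds = {}
    for name, vals in sims.items():
        vals.sort()
        bounds[name] = (vals[int(0.05 * reps)], vals[int(0.95 * reps) - 1])
    return bounds

# Hypothetical animal: genotype IDs (0 = all-WT parent, 255 = all-VAR parent) per site.
animal = {
    "nasal_wash": [0] * 15 + [40, 40, 7, 7, 129, 200],
    "lung_lobe_1": [0] * 18 + [7, 7, 40],
    "lung_lobe_2": [255] * 21,
}
print(free_mixing_percentiles(animal))  # compare each site's observed richness to these bounds
```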

The dissimilarity between populations can be measured by beta diversity. For this study, we evaluated beta diversity from a richness perspective, focusing on dissimilarity in the unique genotypes detected and excluding consideration of their frequency. This approach was used to de-emphasize the effects of WT and VAR parental genotypes, which were likely seeded into all anatomical locations at the time of inoculation. We calculate the beta diversity by treating the viral genotypes in two lobes as two distinct populations:

$$\beta=\frac{S_{1+2}}{\tfrac{1}{2}(S_{1}+S_{2})}$$

(2)

where $S_{1+2}$ is the richness of a hypothetical population composed by pooling the viral genotypes of the two lobes, while $\tfrac{1}{2}(S_{1}+S_{2})$ represents the mean richness of the lobes (ref. 7). The beta diversity ($BD$) of a single comparison can be normalized so that it ranges from zero to one:

$$BD'=\frac{BD-1}{BD_{\max}-1}$$

(3)

where $BD_{\max}$ is the beta diversity calculated by assuming that there are no viral genotypes shared by the two lobes (ref. 7). A $BD'$ (denoted $\beta_n$ below) closer to one indicates that the lobes' viral populations are more dissimilar, while a $BD'$ closer to zero suggests that the lobes have similar unique viral genotypes and overall viral richness. A $BD'$ of zero occurs when all unique genotypes present in one lobe are also present in the other.
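
Treating the genotype sets from two lobes as two populations, equations (2) and (3) reduce to a few lines of Python; the following sketch is illustrative only:

```python
def normalized_beta_diversity(lobe_a: list[int], lobe_b: list[int]) -> float:
    """Beta diversity between two lobes, normalized to [0, 1]."""
    s1, s2 = len(set(lobe_a)), len(set(lobe_b))
    s_pooled = len(set(lobe_a) | set(lobe_b))       # richness of the pooled population
    bd = s_pooled / (0.5 * (s1 + s2))                # equation (2)
    bd_max = (s1 + s2) / (0.5 * (s1 + s2))           # no shared genotypes -> pooled richness = S1 + S2
    return (bd - 1) / (bd_max - 1)                   # equation (3)

# Hypothetical genotype lists for two lung lobes; the shared genotype 0 lowers the value.
print(normalized_beta_diversity([0, 0, 40, 7], [0, 255, 255]))
```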

To address whether evaluating 21 plaques per sample was sufficient to yield robust results on beta diversity, we again used computational simulations, designed to test the sensitivity of beta diversity values to the number of plaques sampled. In these simulations, we again generated plaque data subsets by randomly picking n (out of the possible 21) plaques without replacement. At each sampling effort n, we simulated 1000 samples, with plaques replaced between samples. Beta diversity values were then calculated based on these data subsets at each n. The results typically show that $\beta_n$ values tend to stabilize as n approaches 21, suggesting that further increases in n would not greatly change the results and validating the use of 21 plaques (Supplementary Fig. 10). In a subset of cases involving the nasal sample of Pig 5, however, the relationship between $\beta_n$ and n is less stable. In sharp contrast to most other samples from Pig 5, the nasal site showed 20 WT parental isolates and one VAR parental isolate, while the lung tissues had no WT parental genotypes detected. As a result, each successive plaque drawn from the nasal sample increases the probability of detecting the VAR virus and therefore of detecting a commonality between the nasal tract and any of the lung lobes. Thus, in situations where two tissue sites have a single, relatively rare genotype in common, the number of plaques sampled has a strong impact on $\beta_n$ outcomes.

To simulate free mixing between two lobes, we randomly shuffled the genotypes between each of the 28 pairwise combinations among pig tissues and the single ferret lung–NT combination, and computed $\beta_n$ for each comparison. Free mixing for all combinations was simulated 1000 times. We reasoned that if compartmentalization were present in the observed dataset, the dissimilarity values would fall at the high end of the simulated distribution (>95th percentile).

Percentiles were calculated using the percentileofscore method from the SciPy package v1.9.1 (ref. 59). Paired t tests and ANOVA tests were performed using the ttest_rel and f_oneway methods, respectively, from the SciPy package v1.9.1 (ref. 59).
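
For reference, a minimal usage sketch of these SciPy calls on placeholder data (the arrays are illustrative, not study values) is:

```python
import numpy as np
from scipy.stats import percentileofscore, ttest_rel, f_oneway

simulated = np.random.default_rng(0).normal(size=1000)   # e.g. free-mixing null distribution
observed = 2.1
print(percentileofscore(simulated, observed))             # where the observed value falls

before = np.array([3.0, 2.8, 3.1, 2.9])                   # paired measurements
after = np.array([2.5, 2.6, 2.7, 2.4])
print(ttest_rel(before, after))                            # paired t test

g1, g2, g3 = np.array([1.0, 1.2, 0.9]), np.array([2.0, 2.1, 1.8]), np.array([3.0, 2.9, 3.2])
print(f_oneway(g1, g2, g3))                                # one-way ANOVA across groups
```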

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.


Categories
Co-location

Barclays slashes staff in UK equities business – The TRADE – The TRADE News

Barclays Investment Bank has made a number of job cuts to its London equities team this week, according to several people familiar with the matter. Matthew Rogers, managing director and EMEA head of high-touch sales trading, has left the bank, as has Neil McKay, head of European event-driven trading. William Fu, an equities trader, has also left, as has Max Tilley, a director in e-trading.

Robin Wiseman, head of quantamental data science, is also believed to have departed, according to confidential sources. One of those departing confirmed to The TRADE, on condition of anonymity, that the job cuts took effect immediately yesterday, 7 November.

Barclays declined to comment. Rogers, Wu and Wiseman didn't immediately respond to a request for comment; McKay and Tilley declined to comment.

The cuts represent a modest headcount reduction and do not signal a change in strategy for the bank, another person said. Instead, the goal is to dynamically refocus the business on areas with the highest opportunity.

The job cuts are in line with similar moves made by competitors, several of which have reduced headcount due to market conditions. Barclays increased headcount by 5% in 2022 across the corporate and investment bank, the source added. Barclays is not alone in trimming headcount. It was reported last month that Goldman Sachs is planning to implement a round of job cuts that could result in hundreds of dismissals. Citi this week also slashed staff, cutting dozens of jobs across its investment banking division. Wall Street rival Morgan Stanley is also expected to start a fresh round of job cuts globally in the coming weeks, according to a report from Reuters.

The moves come ahead of the annual bonus season. Barclays announced third-quarter results last month and posted a profit on strong bond trading revenue. The lender posted net profit of £1.5 billion, beating analyst expectations. However, the bank was also hit last month with a $2 million fine by the Financial Industry Regulatory Authority (FINRA) for best execution violations.


Categories
Co-location

Lennar, Icon reveal site of 3D-printed neighborhood near Austin – The Business Journals


Categories
Cloud Hosting

Interior Department Seeks Proposals for $1B Cloud Hosting Solutions III Contract – Executive Gov

The Department of the Interior has initiated bid-seeking for a potential 11-year, $1 billion single-award indefinite-delivery/indefinite-quantity contract covering data consolidation and cloud migration services.

A notice posted Monday on SAM.gov states that the Cloud Hosting Solutions III contract will help DOI transition to a single virtual private center that will support requirements for cloud and managed services.

The CHS III contract will support the implementation and maintenance of a virtual private cloud, enforce policies within the VPC environment and provide various managed services optimized to operate in the single hybrid cloud environment.

The selected enterprise cloud services broker will manage a portfolio of cloud computing, storage and application services across multiple vendor offerings, the final request for proposals document states.

The IDIQ has a five-year base period of performance and three two-year option periods.

Responses are due Dec. 19.


Categories
Cloud Hosting

Healthcare Cloud Computing Market to Reach USD 157.75 Billion by 2030; Widespread Use of Wearable Technology, Big Data Analytics & IoT in The…

The Brainy Insights

Top companies like Iron Mountain Inc., Athenahealth Inc., Dell Inc. & IBM Corp. have introduced software and services that allow for the collection and assimilation of enormous amounts of healthcare data that are beneficial for the development of the market. North America emerged as the largest market for the global healthcare cloud computing market, with a 34.02% share of the market revenue in 2022.

Newark, Nov. 11, 2022 (GLOBE NEWSWIRE) -- Healthcare cloud computing market size to grow from USD 38.55 billion to USD 157.75 billion in 8 years: The Evolution from Niche Topic to High ROI Application

Upcoming Opportunities

Brainy Insights estimates that the healthcare cloud computing market, valued at USD 38.55 billion in 2022, will reach USD 157.75 billion by 2030. In just eight years, healthcare cloud computing has moved from an uncertain, standalone niche use case to a fast-growing, high return on investment (ROI) application that delivers user value. These developments indicate the power of the Internet of Things (IoT) and artificial intelligence (AI), and the market is still in its infancy.

Key Insight of the Healthcare Cloud Computing Market

Europe to account for the fastest CAGR of 22.74% during the forecast period

Europe is expected to have the fastest CAGR of 22.74% in the healthcare cloud computing market. Key factors favouring the growth of the healthcare cloud computing market in Europe are that more people are becoming aware of the accessibility of better cloud computing solutions for the healthcare industry. Additionally, the number of hospital admissions is rising due to the growing older population, who are more susceptible to several ailments. The combined effect of all these factors is considered favourable for the demand for the healthcare cloud computing market in Europe.

However, in 2022, the healthcare cloud computing market was dominated by North America. Due to the widespread use of healthcare IT services and ongoing financial and legislative backing from government organisations, the US is a global leader in the healthcare cloud computing business. The implementation of the Health Information Technology for Economic and Clinical Health Act (HITECH Act) accelerated the deployment of EHRs and related technologies nationwide. The terms of the Act provide that, up to a specific time, healthcare providers will be given financial incentives for showing meaningful use of EHRs; beyond that point, fines may be assessed for failing to justify such usage. Furthermore, in May 2020, Microsoft revealed a cloud service designed exclusively for the healthcare industry to serve doctors and patients better. Healthcare organisations may use this industry-specific cloud service to organise telehealth appointments, manage patient data, and comply with the Health Insurance Portability and Accountability Act (HIPAA).


Get more insights from the 230-page market research report @ https://www.thebrainyinsights.com/enquiry/sample-request/13004

The private segment accounted for the largest market share of 39.81% in 2022

In 2022, the private segment held a significant 39.81% market share, dominating the market. It is essential to securely preserve very sensitive patient data to avoid a data privacy breach that might result in legal ramifications. Many reasons have contributed to the growth of the private cloud market, including rising acceptability due to its improved security and an increasing adoption rate compared to public clouds and hybrid clouds.

Healthcare payers account for the fastest CAGR of 23.06% during the forecast period.

Over the forecasted period, the segment of healthcare payers is predicted to increase at the highest CAGR of 23.06%. Insurance companies, organisations that sponsor health plans (such as unions and employers), and third parties make up the healthcare payers. Payers are quickly using cloud computing solutions for safe data collecting and storage, resolving insurance claims, evaluating risks, and preventing fraud. Payers have traditionally struggled to manage high-risk patient groups and high usage. Payers are implementing these cutting-edge technical methods and remedies to reduce escalating healthcare costs. Additionally, cloud computing supports payers in corporate growth, service enhancement, quality improvement, and cost reduction.

Advancement in market

To increase patient engagement, foster teamwork among healthcare professionals, and enhance clinical and operational insights, Microsoft introduced its Microsoft Cloud for Healthcare suite in November 2020.

The collaboration between CVS Health and Microsoft to advance digital healthcare using AI and cloud computing began in December 2021.

To support research and innovation, the life sciences software business, MetaCell, unveiled a new product in September 2021 called "MetaCell Cloud Hosting," which offers cutting-edge cloud computing solutions mainly created for life science and healthcare enterprises.

Market Dynamics

Driver: Advancements in the technology

Because of recent technological developments and enhanced security, many healthcare institutions use the cloud's advantages more than ever. Due to technological developments like telemedicine, remote monitoring, and natural language processing APIs, cloud technology will continue to grow in the upcoming years to better suit specific digital health settings in several important ways. Several healthcare organisations aspire to develop these cloud computing solutions by combining cutting-edge technologies. Instead of gathering data and sending it to the cloud, the system analyses and processes the data right where it is being collected. The adoption of suitable regulatory measures and the development of high-speed internet are also anticipated to drive growth in the global healthcare cloud computing market. The increasing availability of high-speed internet worldwide is one of the significant factors fuelling the growth of the healthcare cloud computing market.

Restraint: Technological concerns related to the data

The global market for healthcare cloud computing is being restricted by concerns about data privacy, obstacles to data transfer, and a rise in cloud data breaches. The lack of skilled IT workers has also restrained the adoption of this technology. Competent professionals are in great demand because of the challenge of finding expertise in HIPAA. The skill gap is expected to slow the shift to cloud computing platforms.

Opportunity: Increasing adoption of data analytics in the healthcare sector

The increasing adoption of wearable technologies, big data analytics, and the Internet of Things (IoT) in the healthcare sector, as well as the introduction of new payment methods and the affordability of the cloud, will boost the market growth. The market is also being affected by the increase in technology usage due to its many advantages, such as the flexibility, enhanced data storage, and scalability offered by cloud computing, as well as the accessibility of flexible medical benefit plan designs.

Challenge: Stringent regulations regarding data

Industry growth is anticipated to be hampered by the complex regulations governing cloud data centres, data security and privacy issues, among other factors. The market for healthcare cloud computing is anticipated to experience challenges throughout the forecast period due to provider rental rules, worries about interoperability and portability, and growing internet dependency among users.

Custom Requirements can be requested for this report @ https://www.thebrainyinsights.com/enquiry/request-customization/13004

Some of the major players operating in the healthcare cloud computing market are:

Allscripts Healthcare Solution Inc., Iron Mountain Inc., Athenahealth Inc., Dell Inc., IBM Corp., Oracle Corp., Cisco Systems Inc., Qualcomm Inc., VMware Inc., Microsoft Corp., EMC Corp. and GNAX Health

Key segments covered in the market:

By Cloud Deployment:

Private, Public, Hybrid

By End User:

Healthcare Payers, Healthcare Providers

By Region

North America (U.S., Canada, Mexico); Europe (Germany, France, U.K., Italy, Spain, Rest of Europe); Asia-Pacific (China, Japan, India, Rest of APAC); South America (Brazil and the Rest of South America); The Middle East and Africa (UAE, South Africa, Rest of MEA)

Have a question? Speak to Research Analyst @ https://www.thebrainyinsights.com/enquiry/speak-to-analyst/13004

About the report:

The market is analyzed based on value (USD billion). All segments have been analyzed on a worldwide, regional, and country basis. The study includes analysis of more than 30 countries for each segment. The report analyzes driving factors, opportunities, restraints, and challenges to gain critical insight into the market. The study includes Porter's five forces model, attractiveness analysis, product analysis, supply and demand analysis, competitor position grid analysis, and distribution and marketing channels analysis.

About The Brainy Insights:

The Brainy Insights is a market research company aimed at providing actionable insights through data analytics to companies to improve their business acumen. We have a robust forecasting and estimation model to meet clients' objectives of high-quality output within a short span of time. We provide both customized (client-specific) and syndicated reports. Our repository of syndicated reports is diverse across all categories and sub-categories across domains. Our customized solutions are tailored to meet our clients' requirements, whether they are looking to expand or planning to launch a new product in the global market.

Contact Us

Avinash D, Head of Business Development
Phone: +1-315-215-1633
Email: sales@thebrainyinsights.com
Web: http://www.thebrainyinsights.com


Categories
Cloud Hosting

Akamai invests in Macrometa as the two strike partnership – TechCrunch

Edge computing cloud and global data network Macrometa has struck a new partnership and product integrations with Akamai Technologies. Akamai also led a new funding round in Macrometa that included participation from Shasta Ventures and 60 Degree Capital. Akamai Labs CTO Andy Champagne will join Macrometa's board.

Macrometa founder and CEO Chetan Venkatesh told TechCrunch that its Global Data Network (GDN) enables cloud developers to run backend services closer to mobile phones, browsers, smart appliances, connected cars and users in edge regions, or points of presence (PoPs). That reduces outages, because if one edge region goes down, another can take over instantly. Akamai's edge network, meanwhile, covers 4,200 regions around the world.

The partnership between Macrometa and Akamai means the two are combining three infrastructure pieces into one platform for cloud developers: Akamai's edge network, cloud hosting service Linode (which Akamai bought earlier this year) and Macrometa's GDN and edge cloud. Akamai's Edge Workers tech is now available through Macrometa's GDN console, API and SDK, so developers can build a cloud app or API in Macrometa and then quickly deploy it to Akamai's edge locations.

Venkatesh gave some examples of how clients can use the integration between Macrometa and Akamai.

For SaaS customers, the integration means they can see speed increases and latency improvements of 25x to 100x for their products, resulting in less user churn and better conversion rates for freemium models. Enterprise customers using the joint solution can improve the performance of streaming data pipelines and real-time data analytics. They can also deal with data residency and sovereignty issues by vaulting and tokenizing data in geo-fenced data vaults for compliance.

Video streaming clients, meanwhile, can use the integration to move their platforms to the edge, including authentication, content catalog rendering, personalization and content recommendations. Likewise, gaming companies can move servers closer to players and use the Akamai-Macrometa integration for features like player matching, leaderboards, multiplayer game lobbies and anti-cheating features. For e-commerce players competing against Amazon, the joint solution can be used to connect and stream data from local stores and fulfillment centers, enabling faster delivery times.

Macrometa will use the funding for developer education, community development, enterprise event marketing and joint customer sales with Akamai (Macrometa's products are now available through Akamai's sales team).

In a statement about the funding and partnership, Akamai EVP and CTO Robert Blumofe said: "Developers are fundamentally changing the way they build, deploy and run enterprise applications. Velocity and scale are more important than ever, while flexibility in where to place workloads is now paramount. By partnering with and investing in Macrometa, Akamai is helping to form and foster a single platform that meets the evolving needs of developers and the apps they're creating."

Edit: Inaccurate funding figure removed.


Categories
Cloud Hosting

API series – Section: The why & how of distributing GraphQL – ComputerWeekly.com

This is a contributed piece for the Computer Weekly Developer Network written by Daniel Bartholomew, CTO at Section.

Section is known for hosting and delivery of cloud-native workloads that are highly distributed and continuously optimised across a secure and reliable global infrastructure. Bartholomew is a regular speaker at industry events and experienced technologist in agile and containerised development.

His current role is to envision the technology organisations need to simplify and automate global delivery of cloud-native workloads.

Bartholomew writes as follows

Sources such as Cloudflare note that API calls are the fastest-growing type of Internet traffic, and GraphQL APIs are rapidly becoming a de facto way that companies interact with data. While REST APIs still dominate, GraphQL has a significant advantage: it prioritises giving clients exactly the data they request and nothing more.

As part of that, it can combine results from multiple sources including databases and APIs into a single response.

In short, it's more efficient. That can significantly impact bandwidth usage and application responsiveness, and thereby both cost and performance.

However, the nature of the GraphQL structure means that caching responses for improved performance can be a significant challenge, so the secret to making GraphQL more efficient is distributing those GraphQL API servers so they operate (only and always) closer to end users, where and when needed.

Distributing application workloads is a go-to strategy to improve performance, reliability, security and a host of other factors.

When looking at API servers in particular, distribution results in high performance and reliability for the end user, lower costs for backend hosting, lower impact on backend servers, better ability to handle spikes, better security, cloud independence and (if done correctly) no impact on your development and management processes.

This last point is key, as deploying multi-cloud API services has historically been a largely manual process. But before we get to the how, let's dig a bit deeper into why you would want to distribute GraphQL servers.

The performance angle is straightforward: by reducing last-mile distance, latency and responsiveness are considerably improved. Users will experience this directly as a performance boost. In managing the network, you can control how broadly GraphQL servers are distributed, thereby balancing and tailoring performance and cost.

The cost factor is impacted by, among other things, data egress. API servers specifically, and microservice architectures in general, are designed to be very chatty.

When using a hyperscaler for cloud hosting, those data egress costs quickly add up. While there's a lot that can be done to optimise and right-size capacity and resource requirements, it's incredibly difficult to optimise egress cost. Distributing GraphQL servers outside the hyperscaler environment (and potentially adding distributed caching to the solution) can minimise these traffic costs.

There are several aspects to decreasing the impact on backend services and the way in which the development teams operate.

Some are inherent to GraphQL: for instance, versioning is no longer an issue.

Without GraphQL, you have to be careful about versioning and updating APIs. With GraphQL as a proxy, you have flexibility: the GraphQL endpoint can remain the same even if the backend changes. Frontend and backend teams thus become more loosely coupled, meaning they can operate at different paces without blocking, so the business moves faster. A given frontend can also have a single endpoint dedicated to it, called a Backend For Frontend (BFF), which further improves efficiency.

If caching is employed along with distribution, the impact of traffic on backend services demand is decreased as API results themselves can be captured and stored for reuse. Distributed API caching, done well, greatly erodes the need for distributing the database itself and again cuts down on cost.

However, there are challenges with GraphQL when trying to connect data across a distributed architecture, particularly with caching.

With GraphQL, since you are using just one HTTP request, you need a structure to say "I need this information", hence you need to send a body. However, you don't typically send bodies from the client to the server with GET requests, but rather with POST requests, which are historically the only ones used for authentication. This means you can't analyse the bodies using a caching solution such as Varnish Cache, because these reverse proxies typically cannot analyse POST bodies.
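
To make the problem concrete, the hedged Python sketch below (endpoint URL, field names and payload are made up) shows a GraphQL query travelling as a POST body, and the kind of body normalisation a cache layer would need to perform to derive a reusable key, something classic path-keyed HTTP caches never do:

```python
# Illustrative only: a GraphQL query is sent as a POST body, so a reverse proxy
# that keys its cache on method + URL sees every query as the same uncacheable request.
import hashlib
import json
import requests  # assumes the third-party 'requests' package is installed

ENDPOINT = "https://api.example.com/graphql"   # hypothetical endpoint
query = """
query ($id: ID!) { product(id: $id) { name price } }
"""
payload = {"query": query, "variables": {"id": "42"}}

# A body-aware cache would have to normalise the query + variables into a key:
cache_key = hashlib.sha256(
    json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
).hexdigest()
print("cache key:", cache_key)

# The request itself is a POST, which path-based HTTP caches ignore by default.
resp = requests.post(ENDPOINT, json=payload, timeout=10)
print(resp.status_code)
```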

This problem has led to comments like "GraphQL breaks caching" or "GraphQL is not cacheable".

While it is more nuanced than this, GraphQL presents three main caching issues:

CDNs are unable to solve this natively without altering their architecture. Some CDNs have created a workaround of changing POST requests to GET requests, which populates the entire URL path with the POST body of the GraphQL request, which then gets normalised. However, this insufficient solution means you can only cache full responses.


For the best performance, we want to be able to only cache certain aspects of the response and then stitch them together. Furthermore, terminating SSL and unwrapping the body to normalise it can also introduce security vulnerabilities and operational overhead.

GraphQL becomes more performant by using distribution to store and serve requests closer to the end user. It is also the only way to minimise the number of API requests.

This way, you can deliver a cached result much more quickly than doing a full round trip to the origin. You also save on server load, as the query doesn't actually hit your API. If your application doesn't have a great deal of frequently changing or private data, it may not be necessary to utilise edge caching, but for applications with high volumes of public data that are constantly updating, such as publishing or media, it's essential.

While there are multiple benefits to distributing GraphQL servers, getting there is typically not easy as it requires a team to take on the burden of managing a distributed network. Issues like load balancing/shedding, DNS, TLS, BGP/IP address management, DDoS protection, observability and other networking and security requirements become front and center. At a more basic level, how do you manually manage, orchestrate and optimise potentially hundreds of GraphQL servers?

These are the types of issues that have led to the rise of distributed hosting providers. The best of these use automation to take on the burden of orchestration and optimisation, allowing organisations to focus on application development and not API delivery. That said, there are specific considerations when it comes to GraphQL.

First, it will be necessary to host GraphQL containers themselves, not just API functionalities, thus eliminating Function as a Service (FaaS) as a distribution strategy. Moreover, it will be necessary to run other containers alongside the GraphQL server to handle caching, security, etc.

Ideally, you also want to ensure scalability through unlimited concurrency, enabling the distributed GraphQL servers to support a large number of concurrent connections exceeding the source database connection limit.

In the end, whether you roll your own solution, or use one of the cloud-native hosting providers, distributing GraphQL API servers and other compute resources will significantly improve both the user experience and the overall cost and robustness of application services. In short, it makes all the sense in the world for developers.


Categories
Cloud Hosting

Sharon Woods: DISA Strives to Speed Up Cloud-Based Tech Delivery via Industry Partnerships – Executive Gov

Sharon Woods, director of the hosting and compute center at the Defense Information Systems Agency, said DISA intends to expand partnerships with industry to accelerate the delivery of new cloud-based platforms to warfighters, FCW reported Wednesday.

Woods said DISA is working on a fourth cooperative research and development agreement to come up with infrastructure code equipped with pre-configured, pre-accredited baselines to enable service personnel to develop cloud environments within hours instead of weeks or months.

"That's a really critical capability so that mission partners can get into the cloud quickly," she said at an event Wednesday.

Woods said her center has been working to align offerings with DISA's strategic plan for 2022 through 2024, and collaborating with the military and industry to determine which private cloud services could be fielded.

According to the report, DISA's hosting and compute center is developing DevSecOps tools to enhance software development and testing, and on-premises containers as a service to deliver automated configuration controls, security patching and other offerings.


Categories
Cloud Hosting

Big Tech could help Iranian protesters by using an old tool – MIT Technology Review

But these workarounds aren't enough. Though the first Starlink satellites have been smuggled into Iran, restoring the internet will likely require several thousand more. Signal tells MIT Technology Review that it has been vexed by Iranian telecommunications providers preventing some SMS validation codes from being delivered. And Iran has already detected and shut down Google's VPN, which is what happens when any single VPN grows too popular (plus, unlike most VPNs, Outline costs money).

What's more, there's no reliable mechanism for Iranian users to find these proxies, Nima Fatemi, head of global cybersecurity nonprofit Kandoo, points out. They're being promoted on social media networks that are themselves banned in Iran. "While I appreciate their effort," he adds, "it feels half-baked and half-assed."

There is something more that Big Tech could do, according to some pro-democracy activists and experts on digital freedom. But it has received little attention, even though it's something several major service providers offered until just a few years ago.

"One thing people don't talk about is domain fronting," says Mahsa Alimardani, an internet researcher at the University of Oxford and Article19, a human rights organization focused on freedom of expression and information. It's a technique developers used for years to skirt internet restrictions like those that have made it incredibly difficult for Iranians to communicate safely. In essence, domain fronting allows apps to disguise traffic directed toward them; for instance, when someone types a site into a web browser, this technique steps into that bit of browser-to-site communication and can scramble what the computer sees on the back end to disguise the end site's true identity.
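
Conceptually, domain fronting exploits the gap between the domain a network observer sees in the TLS handshake and the host named inside the encrypted request. The Python sketch below is purely illustrative (the domains are placeholders, and many CDNs now reject such mismatches); it only shows the shape of the technique the article describes:

```python
# Conceptual sketch of domain fronting (placeholder domains; many providers now block this).
# The TLS SNI names an innocuous "front" domain, while the encrypted HTTP Host header
# names the real service, so an on-path observer only ever sees the front domain.
import http.client
import ssl

FRONT_DOMAIN = "allowed-cdn.example.com"    # what the network sees in the TLS handshake
HIDDEN_SERVICE = "blocked-app.example.com"  # what the CDN routes to, inside encryption

ctx = ssl.create_default_context()
conn = http.client.HTTPSConnection(FRONT_DOMAIN, 443, context=ctx)   # SNI = front domain
conn.request("GET", "/api/ping", headers={"Host": HIDDEN_SERVICE})   # Host = hidden service
resp = conn.getresponse()
print(resp.status, resp.reason)
conn.close()
```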

"In the days of domain fronting, cloud platforms were used for circumvention," Alimardani explains. From 2016 to 2018, secure messaging apps like Telegram and Signal used the cloud hosting infrastructure of Google, Amazon, and Microsoft (which most of the web runs on) to disguise user traffic and successfully thwart bans and surveillance in Russia and across the Middle East.

But Google and Amazon discontinued the practice in 2018, following pushback from the Russian government and citing security concerns about how it could be abused by hackers. Now activists who work at the intersection of human rights and technology say reinstating the technique, with some tweaks, is a tool Big Tech could use to quickly get Iranians back online.

"Domain fronting is a good place to start if tech giants really want to help," Alimardani says. "They need to be investing in helping with circumvention technology, and having stamped out domain fronting is really not a good look."

Domain fronting could be a critical tool to help protesters and activists stay in touch with each other for planning and safety purposes, and to allow them to update worried family and friends during a dangerous period. "We recognize the possibility that we might not come back home every time we go out," says Elmira, an Iranian woman in her 30s who asked to be identified only by her first name for security reasons.
