Categories
Cloud Hosting

Brighton cloud company bringing 100 new skilled jobs to city – The Argus

Two businessmen are bringing 100 new skilled jobs to the city with their tech company.

Jon Lucas and Jake Madders are the co-directors of Hyve, a cloud hosting company.

They provide online services for companies with websites and applications, and host servers that run ecommerce sites.

Jake and Jon live in Shoreham, and Jake moved to the coast to help Hyve grow.

With their new office space in Circus Street, Jake said the company has a prime location to recruit Brighton residents.

Hyve's office space in Circus Street, Brighton (Image: Hyve)

He said: "We are soexcited to have our UK hub in Brighton.

"Welove Brighton,it's such a cool city and we can see it becoming the Silicon Valley of the UK."

According to Jake, being in Brighton means the company can recruit "local talent".

"We go to the tech fairs and the universities," said Jake.

"We have found it a great place to find people who are skilled inIT."

For Jake and Jon, Brighton is a place with "potential" to train young people too.

Hyve recruits university graduates, has positions to train people and also offers internships.

According to the pair, Hyve will create 100 skilled jobs in Brighton over the next five years.

Jake said this is a "conservative" estimate.

He said: "At the minute we have been recruiting ten to 20 staff every six to eight months, so 100 new jobs is probably on the lower end of the scale.

"We can't seem to recruit fast enough, we are looking for 16 people in the next month or two but will continue recruiting well into 2023."

The Hyve team (Image: Hyve)

Asked why they decided to make Brighton their UK base, the pair said: "There is always something happening. Right now for example, we can see the Christmas market from our window.

"There's a real buzz about the city, lots of talent and so much culture too."

Jake and Jon, who both live with their families, "have no plans to go elsewhere".

Jake said:"There area lot of bright people in Brighton, and we look forward to recruiting more of them in 2023."

Here is the original post:

Brighton cloud company bringing 100 new skilled jobs to city - The Argus

Categories
Cloud Hosting

Apache Iceberg promises to change the economics of cloud-based data analytics – The Register

Feature: By 2015, Netflix had completed its move from an on-premises data warehouse and analytics stack to one based around AWS S3 object storage. But the environment soon began to hit some snags.

"Let me tell you a little bit about Hive tables and our love/hate relationship with them," said Ted Gooch, former database architect at the streaming service.

While there were some good things about Hive, there were also some performance-based issues and "some very surprising behaviors."

"Because it's not a heterogeneous format or a format that's well defined, different engines supported things in different ways," Gooch now a software engineer at Stripe and an Iceberg committer said in an online video posted by data lake company Dremio.

Out of these performance and usability challenges inherent in Apache Hive tables in large and demanding data lake environments, the Netflix data team developed a specification for Iceberg, a table format for slow-moving data or slow-evolving data, as Gooch put it. The project was developed at Netflix by Ryan Blue and Dan Weeks, now co-founders of Iceberg company Tabular, and was donated to the Apache Software Foundation as an open source project in November 2018.

Apache Iceberg is an open table format designed for large-scale analytical workloads while supporting query engines including Spark, Trino, Flink, Presto, Hive and Impala. The move promises to help organizations bring their analytics engine of choice to their data without going through the expense and inconvenience of moving it to a new data store. It has also won support from data warehouse and data lake big hitters including Google, Snowflake and Cloudera.

Cloud-based blob storage like AWS S3 does not have a way of showing the relationships between files or between a file and a table. As well as making life tough for query engines, it makes changing schemas and time travel difficult. Iceberg sits in the middle of what is a big and growing market. Data lakes alone were estimated to be worth $11.7 billion in 2021, forecast to grow to $61.07 billion by 2029.

"If you're looking at Iceberg from a data lake background, its features are impressive: queries can time travel, transactions are safe so queries never lie, partitioning (data layout) is automatic and can be updated, schema evolution is reliable no more zombie data! and a lot more," Blue explained in a blog.

But it also has implications for data warehouses, he said. "Iceberg was built on the assumption that there is no single query layer. Instead, many different processes all use the same underlying data and coordinate through the table format along with a very lightweight catalog. Iceberg enables direct data access needed by all of these use cases and, uniquely, does it without compromising the SQL behavior of data warehouses."

In October, BigLake, Google Cloud's data lake storage engine, began supporting Apache Iceberg, with Databricks' Delta format and the streaming-oriented Hudi set to come soon.

Speaking to The Register, Sudhir Hasbe, senior director of product management at Google Cloud, said: "If you're doing fine-grained access control, you need to have a real table format, [analytics engine] Spark is not enough for that. We had some discussion around whether we are going with Iceberg, Delta or Hudi, and our prioritization was based on customer feedback. Some of our largest customers were basically deciding in the same realm and they wanted to have something that was really open, driven by the community and so on. Snap [social media company] is one of our early customers, all their analytics is [on Google Cloud] and they wanted to push us towards Iceberg over other formats."

He said Iceberg was becoming the "primary format," although Google is committed to supporting Hudi and Delta in the future. He noted Cloudera and Snowflake were now supporting Iceberg while Google has a partnership with Salesforce over the Iceberg table format.

Cloudera started in 2008 as a data lake company based on Hadoop, which in its early days was run on distributed commodity systems on-premises, with a gradual shift to cloud hosting coming later.

Today, Cloudera sees itself as a multi-cloud data lake platform, and in July it announced its adoption of the Iceberg open table format.

Chris Royles, Cloudera's Field CTO, told The Register that since it was first developed, Iceberg had seen steady adoption as the contributions grew from a number of different organizations, but vendor interest has begun to ramp up over the last year.

"It has lots of capability, but it's very simple," he said. "It's a client library: you can integrate it with any number of client applications, and they can become capable of managing Iceberg table format. It enables us to think in terms of how different clients both within the Cloudera ecosystem, and outside it the likes of Google or Snowflake could interact with the same data. Your data is open. It's in a standard format. You can determine how to manage, secure and own it. You can also bring whichever tools you choose to bear on that data."

The result is a reduction in the cost of moving data, and improved throughput and performance, Royles said. "The sheer volume of data you can manage, the number of data objects you can manage and the complexity of the partitioning: it's a multiplication factor. You're talking five or 10 times more capable by using Iceberg as a table format."

Snowflake kicked off as a data warehouse, wowing investors with its so-called cloud-native approach to separating storage and compute, allowing a more elastic method than on-prem-based data warehousing. Since its 2020 IPO, which briefly saw it hit a value of $120 billion, the company has diversified as a cloud-based data platform, supporting unstructured data, the machine learning language Python, transactional data and, most recently, Apache Iceberg.

James Malone, Snowflake senior product manager, told El Reg that cloud blob storage such as that offered by AWS, Google and Azure is durable and inexpensive, but could present challenges when it comes to performance analytics.

"The canonical example is if you have 1,000 Apache Parquet files, if you have an engine that's operating on those files, you have to go tell it if they these 1000 tables with one parquet file a piece or if it is two tables with 500 parquet files it doesn't know," he said. "The problem is even more complex when you have multiple engines operating on the same set of data and then you want things like ACID-compliance and like safe data types. It becomes a huge, complicated mess. As cheap durable cloud storage has proliferated it has also put pressure downward pressure on the problem of figuring out how to do high-performance analytics on top of that. People like the durability and the cost-effectiveness of storage, but they also there's a set of expectations and a set of desires in terms of how engines can work and how you can derive value from that data."

Snowflake supports the idea that Iceberg is agnostic both in terms of the file format and analytics engine. For a cloud-based data platform with a steadily expanding user base, this represents a significant shift in how customers will interact with and, crucially, pay for Snowflake.

The first and smallest move is the idea of external tables. When files are imported into an external table, metadata about the files is saved and a schema is applied on read when a query is run on the table. "That allows you to project a table on top of a set of data that's managed by some other system, so maybe I do have a Hadoop cluster with a metastore, and that system owns the security, it owns the updates, it owns the transactional safety," Malone said. "External tables are really good for situations like that, because it allows you to not only query the data in Snowflake, but you can also use our data sharing and governance tools."
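
For readers unfamiliar with the mechanics Malone describes, the sketch below shows roughly what projecting a table over externally managed Parquet files can look like using the snowflake-connector-python package. The connection details, stage name and column name are illustrative assumptions, and the preview Iceberg table type discussed next uses its own, different syntax.

import snowflake.connector

# Connection parameters are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    warehouse="analytics_wh",
    database="lake_db",
    schema="public",
)
cur = conn.cursor()

# Project a table over Parquet files that live in customer-managed cloud
# storage, referenced through an existing external stage (@lake_stage).
cur.execute("""
    CREATE OR REPLACE EXTERNAL TABLE raw_events
    WITH LOCATION = @lake_stage/events/
    FILE_FORMAT = (TYPE = PARQUET)
    AUTO_REFRESH = FALSE
""")

# Schema on read: each row is exposed as a VARIANT column named VALUE.
cur.execute("SELECT value:event_id::NUMBER FROM raw_events LIMIT 10")
print(cur.fetchall())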

But the bigger move from Snowflake, currently only available in preview, is its plan to build a brand-new table type inside Snowflake. It is set to have parity in terms of features and performance with a standard Snowflake table, but uses Parquet as the data format and Iceberg as the metadata format. Crucially, it allows customers to bring their own storage to Snowflake instead of Snowflake managing the storage for them, which can be a significant cost in an analytics setup. "Traditionally with the standard Snowflake table, Snowflake provides the cloud storage. With an Iceberg table, it's the customer that provides the cloud storage and that's a huge shift," Malone said.

The move promises to give customers the option of taking advantage of volume discounts negotiated with blob storage providers across all their storage, or negotiating new deals based on demand, while only paying Snowflake for the technology it provides in terms of analytics, governance, security and so on.

"The reality is, customers have a lot of data storage and telling people to go move and load data into your system creates friction for them to actually go use your product and is not generally a value add for the customer," Malone said. "So we've built Iceberg tables in a way where our platform benefits work, without customers having to go through the process of loading data into Snowflake. It meets the customer where they are and still provides all of the benefits."

But Iceberg does not only affect the data warehouse market; it also has an impact on data lakes and the emerging lakehouse category, which claims to be a useful combination of the data warehouse and lake concepts. Founded in 2015, Dremio places itself in the lakehouse category, also espoused by Databricks and tiny Californian startup Onehouse.

Dremio was the first tech vendor to really start evangelizing Iceberg, according to co-founder and chief product officer Tomer Shiran. Unlike Snowflake and other data warehouse vendors, Dremio has always advocated an open data architecture, using Iceberg to bring analytics to the data, rather than the other way around, he said. "The world is moving in our direction. All the big tech companies have been built on an open data architecture and now the leading banks are moving with them."

Shiran said the difference with Dremio's incorporation of Iceberg is that the company has used the table format to design a platform to support concurrent production workloads, in the same way as traditional data warehouses, while offering users the flexibility to access data where they have it, based on a business-level UI, rather than the approach of Databricks, for example, which is more designed with data scientists in mind.

While Databricks supports both its own Delta table standard and Iceberg, Shiran argues that Iceberg's breadth of support will help it win out in the long run.

"Neither is going away," Shiran said. "Our own query engine supports both table formats, but Iceberg is vendor agnostic and Apache marshals contributions from dozens companies including Netflix, Apple and Amazon. You can see how diverse it is but with Delta, although it is technically open source, Databricks is the sole contributor."

However, Databricks disputes this line. Speaking to The Register in November, CEO and co-founder Ali Ghodsi said there were multiple ways to justify Delta Lake as an open source project. "It's a Linux Foundation [project]. We contribute a lot to it, but its governance structure is in the Linux Foundation. And then there's Iceberg and Hudi, which are both Apache projects."

Ghodsi argued the three table formats, Iceberg, Hudi and Delta, were similar and all were likely to be adopted across the board by the majority of vendors. But the lakehouse concept distinguishes Databricks from the data warehouse vendors, even as they make efforts to adopt these formats.

"The data warehousing engines all say they support Iceberg, Hudi and Delta, but they're not optimized really for this," he said. "They're not incentivized to do it well either because if they do that well, then their own revenue will be cannibalized: you don't need to pay any more for storing the data inside the data warehouse. A lot of this is, frankly speaking, marketing by a lot of vendors to check a box. We're excited that the lakehouse actually is taking off. And we believe that the future will be lakehouse-first. Vendors like Databricks, like Starburst, like Dremio will be the way people want to use this."

Nonetheless, database vendor Teradata has eschewed the lakehouse concept. Speaking to The Register in October, CTO Stephen Brobst argued that a data lake and data warehouse should be discrete concepts within a coherent data architecture. The argument plays to the vendor's historic strengths in query optimization and supporting thousands of concurrent users in analytics implementations which include some of the world's largest banks and retailers.

Hyoun Park, CEO and chief analyst at Amalgam Insights, said most vendors are likely to support all three table formats, Iceberg, Delta and Hudi, in some form or other, but Snowflake's move with Iceberg is the most significant because it represents a departure for the data warehouse firm in terms of its cost model, but also how it can be deployed.

"It's going to continue to be a three-platform race, at least for the next couple of years, because Hudi benchmarks as being slower than the other two platforms but provides more flexibility in how you can use the data, how you can read the data, how you can ingest the data. Delta Lake versus Iceberg tends to be more of a commercial decision because of the way that the vendors have supported this basically, Databricks on one side and everybody else on the other," he said.

But when it comes to Snowflake, the argument takes on a new dimension. Although Iceberg promises to extend the application of the data warehouse vendor's analytics engine beyond its environment, potentially reducing the cost inherent in moving data, that will come at a price: the very qualities that made Snowflake so appealing in the first place, Park said.

"You're now managing two technologies rather than simply managing your data warehouse which was which is the appeal of Snowflake," he said. "Snowflake is very easy to get started as a data warehouse. And that ease of use is the kind of that first hit, that drug-like experience, that gets Snowflake started within the enterprise. And then because Snowflakes pricing is so linked to data use, companies quickly find that as their data grows 50, 60, 70, or 100 percent per year. Their Snowflake bills increase just as quickly. Using Iceberg tables is going to be a way to cut some of those costs, but it comes at the price of losing the ease of use that Snowflake has provided."

Apache Iceberg surfaced in 2022 as a technology to watch to help solve problems in data integration, management and costs. Deniz Parmaksız, a machine learning engineer with customer experience platform Insider, recently claimed it cut their Amazon S3 costs by 90 percent.

While major players including Google, Snowflake, Databricks, Dremio and Cloudera have set out their stall on Iceberg, AWS and Azure have been more cautious. With Amazon Athena, the serverless analytics service, users can query Iceberg data. But according to Azure, ingestion from data storage systems that provide ACID functionality on top of regular Parquet format files, such as Iceberg, Hudi and Delta Lake, is not supported. Microsoft has been contacted for clarity on its approach. Nonetheless, in 2023, expect to see more news on the emerging data format, which promises to shake up the burgeoning market for cloud data analytics.

Here is the original post:

Apache Iceberg promises to change the economics of cloud-based data analytics - The Register

Categories
Cloud Hosting

MSP vs Vms: What Are the Differences? – StartupGuys.net

Managed service providers (MSPs) have become the norm for businesses that can't handle the expenses of a full-time IT department. As such, businesses are often comparing MSPs to virtual managed services (VMS).

So, is one better than the other? Before you make your decision, you need to know the difference.

Check out our guide below to learn all about MSP vs VMS and which one is right for your company.

MSPs can manage a company's entire IT infrastructure, from servers and storage to networks and applications. The goal of MSPs is to reduce IT overhead and reduce the complexity of managing IT services. An MSP (Managed Service Provider) is an organization that provides businesses with the following:

MSPs offer services such as:

They can also provide services like:

Many MSPs offer a range of solutions, ranging from basic to advanced, to meet the needs of clients.

VMS (Virtual Managed Services) is a type of IT service management that provides users with virtualized IT infrastructure capabilities. It creates a network infrastructure that is:

This can be done without the added costs associated with traditional physical IT equipment and equipment maintenance. VMS virtual managed services enable customers to get the most from their IT investments. This is also done without the cost and complexity of managing and maintaining physical infrastructure.

VMS services typically include everything needed for the virtual network, such as:

Additionally, VMS virtual managed services provide customers with access to the following:

All of these allow businesses to free up staff time for more strategic work, with the added bonus of ensuring that their IT needs will continue to be met. VMS enables businesses to save time and money while ensuring their IT infrastructure meets their organizational needs.

MSP is a model where a business outsources certain professional or technology services to a managed service provider. This allows businesses to concentrate on their core business functions and to leverage a service provider's expertise to manage and optimize their IT operations.

VMS are virtual machines, which are virtualized systems running on one or more physical machines. VMS is an efficient way for businesses that want to reduce workloads, which results in reduced costs and improved performance.

Understanding these core concepts is essential for businesses, especially if they are looking to increase efficiency and maximize their profitability. Leveraging the tools and expertise of an MSP and taking advantage of the benefits of a VMS are key steps in improving an organization's IT operations in today's market.

When it comes to comparing MSP and VMS, there are pros and cons for both.

MSP offers a comprehensive customer tracking and reporting system, along with the power of the Microsoft suite of products. This can be great for businesses that have a lot of customer data to manage or tasks that require a lot of collaboration.

It does, however, have an associated cost, and users need to keep up to date with the latest version if they want to take advantage of the latest features. However, you can also find affordable MSP tools that can still fit your business needs.

VMS is a cloud-based project management solution. It provides a much simpler way of running projects, with no upfront cost and the ability to roll out changes quickly. However, its scalability might be limited, and certain features can be missing compared to MSP.

Ultimately, the decision between MSP and VMS depends on the particular business needs and the associated budget.

MSP versus VMS (Virtual Machine Services) is an intriguing comparison when it comes to platform flexibility. MSP is a managed service provider that offers cost savings on IT services, while VMS is a system that provides cloud computing, virtualization, and other platforms.

Both platforms offer a variety of services and capabilities. For example, VMS offers users the ability to deploy applications in the cloud, while MSPs can provide a customized experience with dedicated hosting and a variety of services.

MSPs are more customizable than VMS, allowing clients to tailor their hosting environment and applications to their specific needs. However, VMS typically offers more features than individual MSPs.

VMS provides scalability with the ability to scale up or down easily. This flexibility gives users the ability to adjust their environment as needs change.

Overall, both MSPs and VMS offer platform flexibility to varying degrees. The difference lies in which one better suits your individual needs. Before making a decision, it is important to weigh the pros and cons of each option to find the right solution.

MSP and VMS models have quickly become the go-to software solutions for businesses, especially those seeking cost-effective applications.

However, the world of MSP vs VMS offerings can be hard to navigate. Oftentimes, businesses struggle to make sense of the differences between the two.

That's why businesses should always look for the best-fit solution. They must ensure they get the most out of their investment. It's important to weigh the pros and cons of each model and determine which one best aligns with the business's goals.

Additionally, they should also consider important factors such as:

By thoroughly assessing the cost-benefit of MSP and VMS, businesses can make informed decisions that provide better cost-effective solutions for their needs.

Comparing MSP vs VMS and the many benefits of each can help you decide the best option for your company. The key to success is figuring out a plan that works for your company and your budget.

Take the time to weigh the pros and cons and get the most out of your plan. Don't hesitate to contact an experienced provider for advice and guidance. Make the decision today that helps secure your future success!

Should you wish to read more articles aside from this basic MSP guide and VMS guide, visit our blog.

View post:

MSP vs Vms: What Are the Differences? - StartupGuys.net

Categories
Cloud Hosting

5 Unstoppable Metaverse Stocks to Buy in 2023 – The Motley Fool

Some investors could have a case of the "metaverse mehs." There was a lot of buzz initially about a virtual universe that could generate trillions of dollars in annual revenue. But after the hype wore off, many once high-flying stocks with metaverse connections lost their luster.

However, the long-term potential for the metaverse remains as great as ever. And many of the companies that are best positioned to succeed in the future have other businesses that are already lucrative. Their stocks give investors a way to profit on the metaverse without betting the farm on it. Here are five unstoppable metaverse stocks to buy in 2023 (listed in alphabetical order).

Amazon (AMZN 2.17%) probably isn't the first stock that comes to mind when you think about the metaverse. However, the company could nonetheless profit tremendously from it. The metaverse will almost certainly operate in the cloud -- and Amazon Web Services (AWS) ranks as the largest cloud-hosting provider in the world.

Metaverse or not, AWS stands out as one of the top reasons to buy Amazon stock in 2023. The cloud-hosting unit continues to grow robustly. One analyst even thinks that AWS is on track to be worth $3 trillion down the road. Amazon as a whole is valued at around $870 billion right now.

Amazon stock is currently down close to 50% from its previous high. This steep decline is largely the result of macroeconomic headwinds and the company's higher spending. The former issue should only be temporary, while Amazon is already taking steps to address the latter. If you're looking for a surefire winner in the next bull market, I think Amazon is a top contender.

Apple (AAPL -3.74%) stands out as another company that isn't as closely identified with the metaverse as some other tech giants. However, that could soon change. Apple reportedly plans to launch a mixed-reality headset, potentially as soon as 2023. Analyst Ming-Chi Kuo, who closely follows the company, predicts that the new device will be "a game-changer for the headset industry." It could also signal a strong opening salvo for Apple's entrance into the metaverse race.

However, Apple doesn't need the metaverse to be successful. The company's iPhone-centered ecosystem continues to generate several hundred billion dollars in sales each year. Augmented reality and 5G adoption should be key drivers of this revenue growth regardless of whether or not the metaverse achieves its potential.

Apple also has other growth drivers. Advertising could become the company's next $10 billion business sooner than expected. Apple also has a major opportunity in the fintech market with Apple Pay. With the stock down close to 30%, buying Apple in the new year could pay off nicely when the inevitable market rebound comes.

No company has tied its fortunes to the metaverse in a more high-profile way than Meta Platforms (META 3.66%). Once known as Facebook, Meta arguably has the grandest vision of what the metaverse could become. It's investing billions of dollars in building hardware and software to make the metaverse dream a reality.

But while the metaverse could be Meta's future, its social media empire pays the bills for now. Many investors are concerned that Facebook and Instagram are losing their appeal. However, Meta still raked in $27.7 billion in revenue in the third quarter of 2022 with $4.4 billion in profits. The number of daily active users on its social platforms increased both year over year and sequentially to 2.93 billion.

NYU professor Aswath Damodaran believes that there's pretty much all upside for Meta based on its valuation. When one of the most influential valuation experts in the world says that, it deserves attention. I'm not sure if Meta stock will necessarily be a big winner in 2023. Buying the stock in the new year, though, could pay off in a huge way over the long run.

Microsoft (MSFT -0.10%) is already partnering with Meta to bring its Teams collaboration software into the metaverse. The company is already a major player in gaming with Xbox. And if its pending acquisition of Activision Blizzard isn't blocked by regulators, Microsoft could become an even bigger force in the metaverse.

Like several others on this list, though, Microsoft's fortunes don't hinge on the metaverse. The company is the 800-pound gorilla in multiple massive markets, including operating systems and office productivity software.

Why buy Microsoft in 2023? For one thing, it's available at a discount with shares down nearly 30% year to date. Microsoft also has several avenues to jump-start its growth, including its Azure cloud-hosting unit and the move into digital advertising.

I'd put Nvidia (NVDA -2.05%) near the top of the list of clear winners if the metaverse takes off as much as some think it will. The company's graphics processing units (GPUs) are the gold standard in running gaming apps. That advantage will likely carry over into the metaverse world.

However, Nvidia's opportunities extend far beyond gaming. In the third quarter of 2022, nearly 65% of the company's total revenue of $5.93 billion came from its data center segment. Professional visualization (which includes Nvidia's Omniverse metaverse platform) and automotive and embedded systems businesses contributed a little over $500 million of the total as well.

Sure, Nvidia has taken a beating this year. The stock is still down nearly 50% year to date even after rising quite a bit in recent months. But Nvidia plans to launch its new Grace CPU Superchip in 2023. Its data center and automotive units should continue to perform well in the new year. If the gaming market begins to recover in 2023, look for Nvidia stock to soar.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Keith Speights has positions in Amazon.com, Apple, Meta Platforms, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Activision Blizzard, Amazon.com, Apple, Meta Platforms, Microsoft, and Nvidia. The Motley Fool recommends the following options: long March 2023 $120 calls on Apple and short March 2023 $130 calls on Apple. The Motley Fool has a disclosure policy.

Original post:

5 Unstoppable Metaverse Stocks to Buy in 2023 - The Motley Fool

Categories
Cloud Hosting

Top 10 Middle East IT stories of 2022 – ComputerWeekly.com

This year has seen the Middle East region host one of the world's biggest sporting events for the first time, when the FIFA World Cup arrived in Qatar in November.

Not only did the oil-rich nation face massive construction challenges, with stadiums and other physical infrastructure needed to host such a large and prestigious event, but it also had to be ready for inevitable cyber attacks.

Cyber security features heavily in this yearly review, with analysis of projects in the United Arab Emirates (UAE) and Saudi Arabia.

Hosting major sporting events might be something countries in the Middle East aspire to do more often as they diversify their economies and reduce their reliance on oil revenues. This top 10 also features articles about some of the new industries being created in the region, the huge sums being invested, as well as some of the challenges being faced.

Here are Computer Weekly's top 10 Middle East IT stories of 2022.

Qatar hosts the FIFA World Cup this year, the first time the event has been staged in the Arab world. Cyber security experts in the country predicted that ticketing, hotel bookings and restaurant reservations would be faked by hackers to capture personal data from people travelling to Qatar.

Also, phishing and social engineering was expected to be used to steal personal and financial information from anyone using the internet to get information about the tournament.

Saudi Arabia's job market is largely shaped by the push for Saudization, a colloquial term for a movement that is officially called nationalisation.

Part of this push is a set of regulations called Nitaqat, which falls under the jurisdiction of the Ministry of Labour and Social Development, and requires organisations operating in Saudi Arabia to maintain certain percentages of Saudi nationals in their workforce.

A group of Google workers and Palestinian rights activists are calling on the tech giant to end its involvement in the secretive Project Nimbus cloud computing contract, which involves the provision of artificial intelligence and machine learning tools to the Israeli government.

Calls for Google to end its involvement in the contract follow claims made by Ariel Koren, a product marketing manager at Google for Education since 2015 and member of the Alphabet Workers Union, that she was pressured into resigning as retaliation for her vocal opposition to the deal.

A survey has revealed that UAE residents believe 3D printing technology will become widespread in the country, and expect it to have the most positive impact on society.

The online survey of more than 1,000 UAE citizens, carried out by YouGov, asked them for their opinions on 16 emerging technologies. According to YouGov: "Data shows that of all the 16 listed technologies, UAE residents have most likely heard a lot about or have some awareness of cryptocurrency, virtual reality, self-driving cars and 3D printing."

The distinction between protecting information technology and protecting operational technology (OT) became very clear in 2010, when the Iranian nuclear enrichment facility Natanz was attacked by Stuxnet malware.

OT includes programmable logic controllers, intelligent electronic devices, human-machine interfaces and remote terminal units that allow humans to operate and run an industrial facility using computer systems.

In a region that is experiencing an unprecedented increase in cyber security threats, the UAE is taking actions that are already paying off.

The increase in threats is described in the State of the market report 2021 and the State of the market report 2022 annual reports published by Help AG. These studies focus exclusively on digital security in the Middle East, highlighting the top threats and the sectors most impacted, and providing advice on where companies should invest their resources.

In September 2021, the Abu Dhabi Department of Health announced that it would create a drone delivery system to be used to deliver medical supplies (medicine, blood units, vaccines and samples) between laboratories, pharmacies and blood banks across the city.

The first version of the system will be based on a network of 40 different stations that drones fly in and out of. Over time, the number of stations is expected to grow.

Middle East-based IT leaders expect IT budgets for 2022 to be equal to, or above, pre-pandemic levels, with security spending expected to take the biggest share.

According to this year's TechTarget/Computer Weekly annual IT Priorities survey, 63% of IT decision-makers in the Middle East region are planning to increase their IT budgets by 5% or more in 2022.

Accenture is to head up a consortium to develop and support a national payments infrastructure in the UAE that will enable next-generation payments.

Alongside suppliers G42 and SIA, the digital payments arm of Nexi Group, Accenture was selected by the Central Bank of the UAE to build and operate the UAE's National Instant Payment Platform over the next five years.

Saudi Arabia is investing $6.4bn in the digital technologies of the future and the tech startups that will harness them.

The announcement was made during a major new tech event, known as LEAP, in the Saudi capital Riyadh.

Go here to see the original:

Top 10 Middle East IT stories of 2022 - ComputerWeekly.com

Categories
Cloud Hosting

Potential cloud protests and maybe, finally, more JADC2 jointness … – Breaking Defense

Pentagon grapples with growth of artificial intelligence. (Graphic by Breaking Defense, original brain graphic via Getty)

WASHINGTON: After military information technology and cybersecurity officials ring in the new year, they'll be coming back to interesting challenges in an alphabet soup of issues: JWCC, JADC2 and CDAO, to name a few.

Of all the things that are likely to happen in the network and cyber defense space, those are three key things I'm keeping an especially close eye on in 2023. Here's why:

[This article is one of many in a series in which Breaking Defense reporters look back on the most significant (and entertaining) news stories of 2022 and look forward to what 2023 may hold.]

Potential JWCC Protests

On Dec. 7, the Pentagon awarded Amazon Web Services, Google, Microsoft and Oracle each a piece of the $9 billion Joint Warfighting Cloud Capability contract after sending the companies direct solicitations back in November.

Under the effort, the four vendors will compete to get task orders. Right now, it's unclear when exactly the first task order will be rolled out or how many task orders will be made.

It's also possible that, just like the Joint Enterprise Defense Infrastructure contract, JWCC could be mired in legal disputes, particularly when it comes to which vendor gets what task order.

"As you know, with any contract, a protest is possible," Lt. Gen. Robert Skinner, director of the Defense Information Systems Agency, told reporters Dec. 8 following the JWCC awards. "What we really focused on was, 'Here are the requirements that the department needs.' And based on those requirements, we did an evaluation, we did market research, we did evaluation to see which US-based [cloud service providers] were able to meet those requirements. The decision based on whether there's a protest or not really didn't play into it because we want to focus on the requirements and who could meet those requirements."

Sharon Woods, director of DISA's Hosting and Compute Center, said at the same briefing that under the acquisition rules for the task orders, there's a $10 million threshold and a $25 million threshold on protests.

"So it's really dependent on how large the task order is," she added.

If there is a protest, the DoD could potentially see delays in a critical program it's been trying to get off the ground for years now.

A New Office To Oversee JADC2

After a year of a lot of back and forth about the Pentagon's Joint All Domain Command and Control effort to better connect sensors to shooters, a new office has been stood up with the aim of bringing jointness to the infamously nebulous initiative.

In October, DoD announced the creation of the Acquisition, Integration and Interoperability Office, housed within the Office of the Secretary of Defense. Dave Tremper, director of electronic warfare in the Office of the Undersecretary of Defense for Acquisition and Sustainment, will lead the office, and the first task will be finding how to truly get JADC2 across the department, Chris O'Donnell, deputy assistant secretary of defense for platform and weapon portfolio management in OUSD (A&S), said Oct. 27.

The creation of the office came a few months after Deputy Defense Secretary Kathleen Hicks said she wanted more high-level oversight of JADC2 and following complaints from military service officials.

Tracking The CDAO

It'll be interesting to see what the new Chief Digital and Artificial Intelligence Officer Craig Martell and his office will accomplish over the next year. Martell, a former Lyft exec, was tapped as the Pentagon's first CDAO earlier in 2022.

As CDAO, Martell has some big responsibilities and can't pull on any prior Pentagon experience. When the CDAO officially stood up June 1, the office absorbed the Joint AI Center, Defense Digital Service and Office of Advancing Analytics, all key parts of the Pentagon's technology network. And there are plans to permit the chief data officer to report directly to the CDAO. (The CDO is operationally aligned to the office and has been rolled into one of its directorates, according to an internal DoD memorandum that was obtained by Breaking Defense in May.)

Already Martell's priorities have slightly shifted: He initially thought his job would entail producing tools for DoD to do modeling, but over the first few months on the job, there's been a focus on driving high quality data. During his remarks at the DIA DoDIIS Worldwide Conference Dec. 13, Martell said what most people think and demand of artificial intelligence is magical pixie dust.

"What they're really saying is, excuse my language, 'Damn, I have a really hard problem and wouldn't it be awesome if a machine could solve it for me?'" he said. "But what we really can deliver in lieu of that, because I'm here to tell you that we can't deliver magical pixie dust, sorry, but what we can deliver is really high quality data."

Martell is also working to further other DoD efforts like zero trust, the Joint Warfighting Cloud Capability and JADC2. The Pentagon has set an ambitious goal of implementing zero trust across the department by 2027 and released a zero-trust strategy in November. The question remains as to what exactly a full implementation of zero trust will look like.

See the rest here:

Potential cloud protests and maybe, finally, more JADC2 jointness ... - Breaking Defense

Categories
Dedicated Server

HostColor.com Ends 2022 With 29 Locations For Delivery of Cloud … – Benzinga

HostColor.com (HC) has reported to the technology media that it ends 2022 with 29 Virtual Data Centers used for delivering Cloud infrastructure services. As of December 2022, the company delivers Hosted Private Cloud and Public Cloud Server services based on VMware ESXi, Proxmox VE, and Linux Containers' virtualization technologies, and 10Gbps Dedicated Servers from the following data center locations:

Localization of the Cloud services & More Bandwidth At Lower Costs

HostColor announced in November 2022 its Cloud infrastructure service priorities for 2023 - "Localization of the Cloud services" and "Increased bandwidth rate at fixed monthly cost". The company has also said that one of its major business goals for 2023 is to help SMBs take control of their IT infrastructure in a cloud service market, characterized by increasing cloud lock-in, imposed by Big Tech and the major cloud providers.

SMBs To Take Control Of Their IT Infrastructure?

"There are two simultaneously developing trends in the Cloud service market - a growing pressure on the smaller and medium IT infrastructure providers by the leading hyperscalers (compute clouds), and a growing dependence of Users of cloud services from the same those big major clouds. The Users' dependence comes to a point of de-facto cloud lock-in," says HostColor.com founder and CEO Dimitar Avramov. He adds that the biggest cloud infrastructure providers impose complex contractual and pricing terms and procedures that make transitioning data and services to another vendor's platform difficult and very costly.

"As a result of the hyperscalers' policies the cloud service users are highly dependent (locked-in) on a single corporate cloud platform. When it comes to the structure of the services and billing, the business models of the major technology clouds feature a complete lack of transparency. All this results in significant loss of money for SMBs that vary from a couple of thousands to millions of dollars on annual basis, depending on the cloud services they use." explains HostColor's executive. He adds that his company is determined to raise users' awareness about the cloud lock-in and to help as many business owners as it can, to move out their IT infrastructures from the major hyperscalers to smaller and medium cloud service providers.

Cloud computing experts have been long ringing the bell that the vendor lock-in in the cloud is real.

David Linthicum says in an article published at InfoWorld on July 2, 2021, that "Cloud-native applications have built-in dependencies on their cloud hosts, such as databases, security, governance, ops tools, etc." and that "It's not rocket science to envision the day when a cloud-native application needs to move from one cloud to another. It won't be easy."

In a publication on CIO.com titled "10 dark secrets of the cloud", the author Peter Wayner warns cloud users "You're locked in more than you think" and adds that "Even when your data or the services you create in the cloud are theoretically portable, simply moving all those bits from one company's cloud to another seems to take quite a bit of time." Mr. Wayner also says that users of the major hyperscalers are "paying a premium - even if it's cheap" and that performance of the major clouds "isn't always as advertised".

Internal research conducted by HostColor.com between 2019 and 2022 examines the terms of service, pricing, and the Cloud IaaS models of the five biggest cloud infrastructure providers. The research shows that their cloud service terms and pricing models feature a high level of opacity. This results in significant losses for their users, varying from a couple of thousand to hundreds of thousands of dollars on an annual basis, depending on the services they use.

About HostColor

HostColor.com ( https://www.hostcolor.com ) has been a global IT infrastructure and Web Hosting service provider since 2000. The company has its own virtual data centers and a capacity for provisioning dedicated servers and colocation services in 50 data centers worldwide. Its subsidiary HCE ( https://www.hostcoloreurope.com ) operates Cloud infrastructure and delivers dedicated hosting services in 19 European countries.

Release ID: 478856

Excerpt from:

HostColor.com Ends 2022 With 29 Locations For Delivery of Cloud ... - Benzinga

Categories
Dedicated Server

Top Web Hosting and VPS Services Reviewed – Digital Journal

Web hosting refers to the practice of hosting a website on a server so that it can be accessed by users over the internet. There are several types of web hosting options available, including shared hosting, virtual private server (VPS) hosting, and dedicated server hosting.

Shared hosting is the most basic and affordable type of web hosting. It involves sharing a single physical server and its resources with multiple websites. This means that each website shares the same CPU, RAM, and disk space as other websites on the server. Shared hosting is suitable for small websites with low traffic and limited resources.

VPS hosting, on the other hand, provides a more isolated and secure environment for hosting a website. In VPS hosting, a single physical server is divided into multiple virtual servers, each with its own resources and operating system. This allows each website to have its own dedicated resources, making it more performant and scalable than shared hosting. VPS hosting is a good option for websites with moderate traffic and resource requirements.

Dedicated server hosting is the most powerful and expensive type of web hosting. In this type of hosting, a single website is hosted on a physical server that is dedicated solely to it. This means that the website has access to all of the server's resources and is not sharing them with any other websites. Dedicated server hosting is suitable for large websites with high traffic and resource demands.

Cloud hosting is a type of web hosting that involves hosting a website on a network of virtual servers, which are distributed across multiple physical servers. This allows for greater scalability and flexibility, as the resources of the virtual servers can be easily adjusted to meet the changing needs of the website.

One of the main advantages of cloud hosting is its scalability. With traditional web hosting, if a website experiences a sudden increase in traffic, it may run out of resources and become slow or unavailable. With cloud hosting, the website can easily scale up its resources to meet the increased demand. This is done by adding more virtual servers to the network or increasing the resources of existing virtual servers.

Another advantage of cloud hosting is its reliability. With traditional web hosting, if a physical server goes down, the websites hosted on it will also be unavailable. With cloud hosting, the virtual servers are distributed across multiple physical servers, so if one server goes down, the other servers can continue to serve the website, ensuring that it remains available.

Cloud hosting is also generally more flexible than traditional web hosting, as it allows for the creation of custom configurations and the use of multiple operating systems. It also often includes additional features such as load balancing, automated backups, and monitoring.

Overall, cloud hosting is a good option for websites that require high scalability, reliability, and flexibility. It's often used by large websites with high traffic and resource demands, such as e-commerce websites and enterprise applications. However, it can also be a good choice for smaller websites that want to take advantage of the scalability and reliability of the cloud. We also recommend reading about cloud hosting as well as WordPress hosting on CaveLions.

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit Top Web Hosting and VPS Services Reviewed

Read more:

Top Web Hosting and VPS Services Reviewed - Digital Journal

Categories
Dedicated Server

Tachyum Celebrates 2022 and Announces 2023 Series C and … – Business Wire

LAS VEGAS--(BUSINESS WIRE)--Tachyum ended 2022 with accomplishments including the worldwide debut of Prodigy, the world's first universal processor for high-performance computing, and more than a dozen commercialization partnerships, effectively moving the startup to a leadership position in semiconductors.

2022 marked the introduction of Tachyum's Prodigy to the commercial market. Prodigy exceeded its performance targets and is significantly faster than any processors currently available in hyperscale, HPC and AI markets. With its higher performance and performance per-dollar and per-watt, Tachyum's Prodigy processor will enable the world's fastest AI supercomputer, currently in planning stages.

Tachyum signed 14 significant MOUs with prestigious universities, research institutes, and innovative companies like the Faculty of Information Technology at Czech Technical University in Prague, Kempelen Institute of Intelligent Technologies, M Computers, Picacity, LuxProvide S.A. (Meluxina supercomputer), Mat Logica, and Cologne Chip. Other agreements are in progress.

Technical Achievements

The launch of Prodigy followed the successful preproduction and Quality Assurance (QA) phases for hardware and software testing on FPGA emulation boards, and achievements in demonstrating Prodigys integration with major platforms to address multiple customer needs. These included FreeBSD, Security-Enhanced Linux (SELinux), KVM (Kernel-based Virtual Machine) hypervisor virtualization, and native Docker under the Go programming language (Golang).

Software ecosystem enhancements also included improvements to Prodigy's Unified Extensible Firmware Interface (UEFI) specification-based BIOS (Basic Input Output System) replacement firmware, incorporating the latest versions of the QEMU emulator and GNU Compiler Collection (GCC). These improvements allow quick and seamless integration of data center technologies into Tachyum-based environments.

Tachyum completed the final piece of its core software stack with a Baseboard Management Controller (BMC) running on a Prodigy emulation system. This enables Tachyum to provide OEM/ODMs and system integrators with a complete software and firmware stack, and serves as a key component of the upcoming Tachyum Prodigy 4-socket reference design.

In its hardware accomplishments, Tachyum built its IEEE-compliant floating-point unit (FPU) from the ground up, one of the most advanced in the world with the highest clock speeds, and progressed to running applications in Linux interactive mode on Prodigy FPGA hardware with SMP (Symmetric Multi-Processing) Linux and the FPU. This proved the stability of the system and allowed Tachyum to move forward with additional testing. It completed LINPACK benchmarks using Prodigy's FPU on an FPGA. LINPACK measures a system's floating-point computing power by solving a dense system of linear equations to determine performance. It is a widely used benchmark for supercomputers.
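
For context, LINPACK's figure of merit is simple enough to sketch: time a dense solve of Ax = b and divide the standard operation count, (2/3)n^3 + 2n^2 floating-point operations, by the elapsed time. The toy example below illustrates the idea in Python with NumPy; it is not Tachyum's benchmark code and says nothing about Prodigy's results.

import time
import numpy as np

n = 2000  # problem size; illustrative only
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)  # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

# Standard LINPACK/HPL operation count for an LU-based dense solve.
flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"{flops / elapsed / 1e9:.2f} GFLOPS")

# Residual check, as HPL does, to confirm the answer is usable.
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))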

The company published three technical white papers that unveiled never-before-disclosed architectural designs of the system-on-chip (SOC) and AI training techniques, revealing how Prodigy addresses trends in AI and enables deep learning workloads that are more environmentally responsible, with lower energy consumption and reduced carbon emissions. One paper defined a groundbreaking high-performance, low-latency, low-cost, low-power, highly scalable exascale-flattened networking solution that provides a superior alternative to the more expensive, proprietary and limited-scalability InfiniBand communications standard.

Around the world

Tachyum was a highlight of exhibits at Expo 2020 Dubai with the world premiere of the Prodigy Universal Processor for supercomputers, and presented Prodigy at LEAP22 in Riyadh, Saudi Arabia. Tachyum was named one of the Most Innovative AI Solutions Providers to watch by Enterprise World. Company executives were among the featured presenters at ISC High Performance 2022 and Supercomputing 2022 events.

Looking forward

With its Series C funding, expected to close in 2023, Tachyum will finance the volume production of Prodigy Universal Processor Chip and be positioned for sustained profitability, as well as increase headcount.

2023 will see the company move to tape-out, silicon samples, production, and shipments. After running LINPACK benchmarks using Prodigy's FPU on an FPGA, there are only four more steps to go before the final netlist of Prodigy: running UEFI and boot loaders, loading Linux on the FPGA, completing vector-based LINPACK testing with I/O, followed by I/O with virtualization and RAS (Reliability, Availability and Serviceability).

Prodigy delivers unprecedented data center performance, power, and economics, reducing CAPEX and OPEX significantly. Because of its utility for both high-performance and line-of-business applications, Prodigy-powered data center servers can seamlessly and dynamically switch between workloads, eliminating the need for expensive dedicated AI hardware and dramatically increasing server utilization. Tachyum's Prodigy integrates 128 high-performance custom-designed 64-bit compute cores, to deliver up to 4x the performance of the highest-performing x86 processors for cloud workloads, up to 3x that of the highest performing GPU for HPC, and 6x for AI applications.

Follow Tachyum

https://twitter.com/tachyum https://www.linkedin.com/company/tachyum https://www.facebook.com/Tachyum/

About Tachyum

Tachyum is transforming AI, HPC, public and private cloud data center markets with its recently launched flagship product. Prodigy, the world's first Universal Processor, unifies the functionality of a CPU, a GPU, and a TPU into a single processor that delivers industry-leading performance, cost, and power efficiency for both specialty and general-purpose computing. When Prodigy processors are provisioned in a hyperscale data center, they enable all AI, HPC, and general-purpose applications to run on one hardware infrastructure, saving companies billions of dollars per year. With data centers currently consuming over 4% of the planet's electricity, predicted to be 10% by 2030, the ultra-low power Prodigy Universal Processor is critical to continue doubling worldwide data center capacity every four years. Tachyum, co-founded by Dr. Radoslav Danilak, is building the world's fastest AI supercomputer (128 AI exaflops) in the EU based on Prodigy processors. Tachyum has offices in the United States and Slovakia. For more information, visit https://www.tachyum.com/.

Continue reading here:

Tachyum Celebrates 2022 and Announces 2023 Series C and ... - Business Wire

Categories
Dedicated Server

Could ‘Peer Community In’ be the revolution in scientific publishing … – Gavi, the Vaccine Alliance

In 2017, three researchers from the National Research Institute for Agriculture, Food and the Environment (INRAE), Denis Bourguet, Benoit Facon and Thomas Guillemaud, founded Peer Community In (PCI), a peer-review-based service for recommending preprints (referring to the version of an article that a scientist submits to a review committee). The service greenlights articles and makes them and their reviews, data, codes and scripts available on an open-access basis. Out of this concept, PCI paved the way for researchers to regain control of their review and publishing system in an effort to increase transparency in the knowledge production chain.

The idea for the project emerged in 2016 following an examination of several failings in the science publishing system. Two major problems are the lack of open access for most publications, and the exorbitant publishing and subscription fees placed on institutions.

Even in France, where the movement for open science has been gaining momentum, half of publications are still protected by access rights. This means that they are not freely accessible to citizens, journalists, or any scientists affiliated with institutions that cannot afford to pay scientific journal subscriptions. These restrictions on the free circulation of scientific information are a hindrance to the sharing of scientific knowledge and ideas at large.

Moreover, the global turnover for the academic publishing industry in science, technology and medicine is estimated at US$10 billion for every 3 million articles published. This is a hefty sum, especially given that profit margins enjoyed by major publishing houses have averaged 35-40% in recent years. Mindful of these costs and margins, the PCI founders wanted scientists and institutions to take back control of their own publishing. And so, in 2017, the Peer Community In initiative was born.
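Taken at face value, those two figures imply a striking per-article cost. The minimal sketch below simply divides the quoted turnover by the quoted article count and applies the reported margin range; the results are rough averages for illustration, not audited accounts.

```python
# Back-of-the-envelope figures from the article:
# ~US$10 billion in annual turnover for ~3 million published articles,
# with major publishers reporting 35-40% profit margins.

TURNOVER_USD = 10_000_000_000   # global STM publishing turnover (approx.)
ARTICLES_PER_YEAR = 3_000_000   # articles published per year (approx.)
MARGIN_RANGE = (0.35, 0.40)     # reported profit margins of large publishers

revenue_per_article = TURNOVER_USD / ARTICLES_PER_YEAR
print(f"Average revenue per article: ${revenue_per_article:,.0f}")

for margin in MARGIN_RANGE:
    profit_per_article = revenue_per_article * margin
    print(f"At a {margin:.0%} margin, profit per article: ${profit_per_article:,.0f}")
```

On those numbers, the system collects roughly US$3,300 in revenue, and well over US$1,000 in profit, for every article it publishes.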

PCI sets up communities of scientists who publicly review and approve pre-prints in their respective fields, while applying the same methods as those used for conventional scientific journals. Under this peer-review system, editors (known as recommenders) carry out one or more review rounds before deciding whether to reject or approve the preprint submitted to the PCI. Unlike virtually all traditional journals, if an article is approved, the editor must write a recommendation outlining its content and merits.

This recommendation is then published along with all other elements involved in the editorial process (including reviews, editorial decisions, authors' responses, etc.) on the site of the PCI responsible for organising the preprint review. This level of transparency is what makes PCI unique within the current academic publishing system.
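The flow just described (one or more review rounds handled by a recommender, a reject/approve decision, a mandatory written recommendation on approval, and open publication of the whole editorial record) can be summarised in a small illustrative model. This is only a sketch of the process as the article describes it; the class and field names below are invented for the example and bear no relation to PCI's actual software.

```python
# Illustrative model of the PCI editorial flow described above.
# All names are hypothetical; this is not PCI's actual system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewRound:
    reviews: List[str]            # reviewer reports gathered in this round
    author_response: str = ""     # authors' reply to the reviews

@dataclass
class PreprintSubmission:
    title: str
    rounds: List[ReviewRound] = field(default_factory=list)
    decision: str = "pending"     # becomes "approved" or "rejected"
    recommendation: str = ""      # written only if approved

    def decide(self, approve: bool, recommendation_text: str = "") -> None:
        # Unlike most journals, an approval must come with a public
        # recommendation outlining the article's content and merits.
        if approve and not recommendation_text:
            raise ValueError("An approved preprint requires a written recommendation")
        self.decision = "approved" if approve else "rejected"
        self.recommendation = recommendation_text

    def public_record(self) -> dict:
        # Everything in the editorial process is published openly:
        # reviews, author responses, the decision and the recommendation.
        return {
            "title": self.title,
            "rounds": [vars(r) for r in self.rounds],
            "decision": self.decision,
            "recommendation": self.recommendation,
        }

# Example: one review round followed by an approval.
submission = PreprintSubmission("Example preprint")
submission.rounds.append(ReviewRound(reviews=["Report A", "Report B"],
                                     author_response="Revised as requested"))
submission.decide(approve=True,
                  recommendation_text="A sound study; data, code and methods are open.")
print(submission.public_record())
```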

Lastly, the authors upload the finalised, approved and recommended version of the article free of charge and on an open access basis to the preprint server or open archive.

PCI is making traditional journal publication obsolete. Due to its de facto peer-reviewed status, the finalised, recommended version of the preprint is already suitable for citation. In France, PCI-recommended preprints are recognised by several leading institutions, review committees and recruitment panels at the National Centre for Scientific Research (CNRS). At the Europe-wide level, the reviewed preprints are recognised by the European Commission and funding agencies such as the Bill and Melinda Gates Foundation and the Wellcome Trust.

PCI is also unique in its ability to separate peer review from publishing, given that approved and recommended preprints can still be submitted by authors for publication in scientific journals. Many journals even advertise themselves as PCI-friendly, meaning that when they receive submissions of PCI-recommended preprints, they take into account the reviews already completed by PCI in order to speed up their editorial decision-making.

This initiative was originally intended exclusively for PCIs to review and recommend preprints, but authors were sometimes frustrated: their recommended preprints appeared only on dedicated servers (where, despite being reviewed and recommended, preprints are still poorly indexed and not always recognised as genuine articles), or they had to be submitted for publication in a journal at the risk of undergoing another round of review. Since the creation of Peer Community Journal, however, scientists have had access to direct, unrestricted publishing of articles recommended by disciplinary PCIs.

Peer Community Journal is a diamond journal, meaning one that publishes articles with no fees charged to authors or readers. All content can be read free of charge without a pay-wall or other access restrictions. Designed as a general journal, Peer Community Journal currently comprises 16 sections (corresponding to the PCIs in operation) and is able to publish any preprint recommended by a disciplinary PCI.

Currently there are 16 disciplinary PCIs (including PCI Evolutionary Biology, PCI Ecology, PCI Neuroscience and PCI Registered Reports), with several more on the way. Together they boast 1,900 editors, 130 editorial committee members and more than 4,000 scientist users overall. PCI and Peer Community Journal are recognised by 130 institutions worldwide, half of which (including the University of Perpignan Via Domitia) support the initiative financially. The number of French academics who are familiar with and/or who use PCI varies greatly between scientific communities. The percentage is very high in communities with a dedicated PCI (for example ecology and evolutionary biology, served by PCI Ecology and PCI Evol Biol, where an estimated half of scientists are now familiar with the system), but remains low among those without one.

To date, more than 600 articles have been reviewed through the system. Biology maintains a significant lead, but more and more fields are joining, including archaeology and movement sciences. There is still plenty of scope for growth, both through greater engagement from those already familiar with the system and through the creation of new PCIs by scientists in fields not yet represented by the current communities.

Other open-science initiatives have been set up across the globe, but none have quite managed to emulate the PCI model. Mostly limited to offers of peer-reviewed preprints (often directly or indirectly requiring a fee), these initiatives, such as Review Commons and PreReview, do not involve an editorial decision-making process and are therefore unable to effect change within the current publishing system.

While the PCI model is undeniably growing and now attracts more than 10,000 unique visitors per month across all PCI websites, the creation of Peer Community Journal shows that the traditional academic publishing system is still intact. That system will doubtless endure into the near future, even if preprint review and approval should, one hopes, prove a sustainable model thanks to its cost-effectiveness and its transparency across the board.

In the meantime, PCI and Peer Community Journal present a viable alternative for publishing diamond open-access articles that are completely free of charge for authors and readers. At a time of unbridled, unjustifiable inflation in subscription and publishing prices, numerous institutions and universities are backing the rise of these diamond journals. PCI and Peer Community Journal embrace this dynamic by empowering all willing scientific communities to become agents of their own review and publishing process.

When science and society nurture each other, we reap the benefits of their mutual dialogue. Research can draw from citizens' own contributions, improve their lives and even inform public decision-making. This is what we aim to show in the articles published in our series Science and Society, A New Dialogue, which is supported by the French Ministry of Higher Education and Research.

Denis Bourguet, Inrae; Etienne Rouzies, Université de Perpignan, and Thomas Guillemaud, Inrae

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Denis Bourguet is co-founder of Peer Community In and Peer Community Journal and president of the Peer Community In association.

Thomas Guillemaud is co-founder and works on the operation of Peer Community In and Peer Community Journal. Peer Community In has received over 100 funding from public bodies including the Ministry of Higher Education and Research, numerous universities and research organisations since 2016.

Etienne Rouzies does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

INRAE provides funding as a founding partner of The Conversation FR.

Université de Perpignan provides funding as a member of The Conversation FR.

View post:

Could 'Peer Community In' be the revolution in scientific publishing ... - Gavi, the Vaccine Alliance