Rise of the Neoclouds

A new class of cloud computing providers is emerging, focused on powering AI innovators to achieve new levels of functionality at scale.


It’s likely you’ve heard of the large multipurpose cloud computing providers such as AWS, Google Cloud and Microsoft Azure. Also referred to as hyperscalers, these titans of cloud infrastructure power a significant portion of the online enterprises we engage with every day.

General purpose cloud computing adoption is ever increasing, with many years of growth to come. However, a newer, leaner and more focused breed of cloud provider, known as neoclouds, is carving out growing niches and may eventually set the standards for the future of IT infrastructure.

So what is the difference between the general purpose cloud provider business model compared to neoclouds, and how are neoclouds delivering at such pace?

The Difference

General Purpose Cloud

Hyperscalers tend to offer a broad menu of infrastructure incorporating many thousands of services for virtually every workload type. This spans storage, compute, networking, database management, analytics and more besides.

They tend to be less flexible with regards to pricing, catering to larger enterprise clients as a priority due to the extremely lucrative opportunities they bring. One large customer can bring as much revenue to a hyperscaler as hundreds of smaller customers.

It’s also worth noting that hyperscalers themselves have significant compute demands. Between them, Amazon, Google and Microsoft need more compute and storage than most other global brands. Their priority, therefore, is to build services that serve their own needs first, then expand them to serve their clients’ needs as well.

In fact, AWS started out as a way to address Amazon’s internal storage and compute demands. Selling spare web service capacity to others as a service was born as a way to manage the cost of Amazon’s own exponentially growing demands.

As a consequence of this dual priority to serve internal and customer demands, the flexibility hyperscalers offer can suit most needs, but may fall short for emerging and more specialized requirements.

A key benefit of hyperscalers is the reassurance of security and stability that they bring. Businesses whose online presence is their lifeblood cannot afford any downtime or performance issues, hence why they choose trusted hyperscalers to build their businesses on.

Enter the neoclouds.

This emerging group of cloud providers is typically leaner and more flexible, and many are scaling fast. Examples include CoreWeave, Lambda Labs, Voltage Park, Crusoe and Nebius.

They typically provide high performance computing infrastructure purpose-built for AI, and lean heavily on delivering faster and more cost-efficient compute for training and running AI models.

Neoclouds are usually optimized to serve AI-native startups, research labs and machine learning engineering teams; however, over time it would not be unreasonable to expect them to tap into broader and larger markets.

Neocloud Providers:

  • Deploy AI infrastructure in days/weeks versus weeks/months for traditional hyperscalers.

  • Cost savings: AI compute from neoclouds can cost up to 66% less per GPU/hour.

  • Lean operations mean faster onboarding, scaling, and support for AI-specific needs.

  • Serve AI-native organizations, research labs, agile startups, anyone needing high-end, flexible, scalable GPU power for ML/AI workloads.
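To make the "up to 66% less per GPU/hour" claim concrete, here is a minimal back-of-the-envelope sketch. The rates below are invented placeholders, not published prices from any provider:

```python
# Hypothetical illustration of the "up to 66% less per GPU/hour" claim.
# Both rates are assumptions for illustration, not real list prices.
hyperscaler_rate = 6.00                         # $/GPU-hour (hypothetical)
neocloud_rate = hyperscaler_rate * (1 - 0.66)   # 66% cheaper

gpus, hours = 64, 24 * 7  # a week-long training run on 64 GPUs

hyperscaler_cost = gpus * hours * hyperscaler_rate
neocloud_cost = gpus * hours * neocloud_rate

print(f"Hyperscaler: ${hyperscaler_cost:,.0f}")
print(f"Neocloud:    ${neocloud_cost:,.0f}")
```

At these assumed rates, the same week-long run costs roughly a third as much on the neocloud, which is why the savings compound quickly for teams running continuous training workloads.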

Hyperscalers:

  • Provide broad reliability and global presence, but may suffer AI resource waitlists and higher costs for GPU-rich workloads.

  • Billing complexity and general-purpose platform overhead.

  • Massive broad enterprise base including retail, finance, healthcare, governments.

| Attribute | Traditional Cloud (Hyperscaler) | Neocloud Provider (AI-Focused) |
| --- | --- | --- |
| Scope | General purpose, broad services | Singular focus: AI, GPU compute |
| Hardware architecture | CPU-led, some GPUs | GPU-led, bare-metal optimized |
| Speed | Weeks/months for onboarding | Days/weeks for onboarding |
| Flexibility | Predefined, less optimized | Customizable, AI-optimized |
| Billing | Instance/hour, complex | Usage/job-based, transparent |
| Cost | Higher for GPU workloads | Up to 66% less for comparable GPUs |
| Developer experience | IT generalist, wide integrations | ML/AI specialist, prebuilt environments |


CoreWeave

Let’s take two of the most talked about neoclouds at the moment to understand the keys to their success, starting with CoreWeave.

CoreWeave began as a crypto mining company and eventually refocused on cloud infrastructure after identifying high demand from generative AI and machine learning use cases. It reestablished itself as an AI cloud provider with a focus on providing highly sought-after NVIDIA GPUs.

Their go-to-market approach includes leasing out their high-end GPUs to users for AI training and inference.

Unlike many neoclouds, CoreWeave prioritized large-scale businesses and enterprise clients, including Microsoft and OpenAI, who account for a significant portion of its client concentration.

CoreWeave’s growth is aided by their flexibility, providing hourly, reserved or job-based billing which appeals to their target market.
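The trade-offs between those billing models can be sketched with a few lines of arithmetic. Everything below — the rates, the 30% reserved discount, the minimum reserved term — is a hypothetical assumption for illustration, not CoreWeave's actual pricing:

```python
# Hedged sketch: comparing hypothetical hourly, reserved, and job-based
# billing for the same workload. All rates and discounts are assumptions,
# not any provider's real pricing.
def hourly_cost(gpus, hours, rate):
    # On-demand: pay the full rate for every hour the GPUs are held.
    return gpus * hours * rate

def reserved_cost(gpus, hours, rate, discount=0.30, min_hours=720):
    # Reserved capacity: discounted rate, but billed for a minimum term
    # even if the workload finishes early.
    return gpus * max(hours, min_hours) * rate * (1 - discount)

def job_cost(gpu_hours_used, rate):
    # Job-based: pay only for the GPU-hours the job actually consumes.
    return gpu_hours_used * rate

rate = 3.00  # $/GPU-hour, hypothetical
print(hourly_cost(8, 100, rate))      # short burst on 8 GPUs for 100 hours
print(reserved_cost(8, 100, rate))    # same burst overpays under a reserved minimum
print(job_cost(8 * 100 * 0.8, rate))  # job-based bills only GPU-hours used (80% here)
```

For a short burst, on-demand or job-based billing wins; a reserved commitment only pays off once utilization approaches the minimum term, which is why offering all three appeals to clients with very different workload shapes.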

Due to the highly capital intensive nature of AI cloud infrastructure provision, CoreWeave’s spending levels are high, with over $14 billion raised to date.

Whilst the investment has been significant, there is an expectation that rewards will eventually outweigh the investment due to the revolutionary nature of AI adoption across many industry sectors.

Other core strengths that enabled CoreWeave to scale from the early 2020s to its current size include:

  • Rapid scalability: 32 data centers with over 25,000 GPUs, plus a significant multi-year order backlog that helps them raise debt to fuel their expansion.

  • Optimization: they boast industry-leading cluster performance, with up to 20% higher system efficiency than most rivals.

  • Partnerships: rather than going it alone, CoreWeave established relationships with the biggest players, including NVIDIA, whose AI GPUs are considered the most advanced, and Microsoft and OpenAI, global leaders in software and AI innovation.

Such is the depth of the NVIDIA relationship that NVIDIA has also invested in CoreWeave, lending credibility and improving its chances of being near the front of the queue when the next NVIDIA GPUs become available. This inevitably increases speed to market and creates customer demand.

Taking an asset-light approach by leasing data centers and GPU clusters creates flexibility and enables faster growth compared with owning real estate. Whilst this does create benefits, it may also become a hindrance in future, as CoreWeave has less ability to influence change or understand the intimate makeup of its data centers, which could result in missed business opportunities.

One of the key facts that stands out for me about how CoreWeave operates is its ability to move fast and create mutually beneficial relationships with other leading companies.

As a leader of change initiatives, I have found that building coalitions and seeking win-win scenarios is crucial for product delivery success.


Nebius

Nebius was born out of the cloud arm of Russian conglomerate Yandex, often known as “the Russian Google”.

Yandex divested its Russian businesses, forming the Nebius Group, whose core business provides AI infrastructure.

Whilst focus was initially on the European market, it quickly became apparent that demand for fast, secure and reliable AI infrastructure was high across multiple markets, most notably the US.

CEO Arkady Volozh, famed for his leadership at Yandex, steers Nebius with seasoned colleagues and a team of deep engineering veterans from Yandex and elsewhere. Like CoreWeave, Nebius is also backed by NVIDIA amongst other ventures, boosting both technical and market credibility.

Business Model:

  • Nebius is vertically integrated, meaning the business designs, owns and operates its own AI data centers, rather than leasing from third parties as CoreWeave does.

  • Offers a full-stack AI cloud platform (Nebius AI Studio, Nebius AI Cloud) with enterprise-grade, developer-friendly features, focused on seamless model development and deployment. Although a start-up, they have the talent and performance of a hyperscaler.

  • Prioritizes cost efficiency, robust performance, and high reliability through in-house engineering and rigorous operational discipline.

Reflections on how Nebius is succeeding:

Talented Team, Mission Driven

  • With any new product development, there is no substitute for highly motivated and talented teams.

  • Whilst Nebius is a relatively new company, many of its engineers were inherited from the former parent company, which meant they already knew how to work together and held the skills required to operate at scale immediately.

Cost Control:

  • Being painfully aware of the extremely high infrastructure set up costs, Nebius considered strategies to invest in other businesses with the option to potentially sell them at higher margins to fund the core infrastructure business.

  • Even if they did not sell the other businesses they owned stakes in, the very fact that those businesses could appreciate in value, would also increase Nebius’ value, enabling them to secure debt at lower rates due to higher valuations.

  • When delivering change initiatives or developing products, the ability to endure periods of high investment with well-formed strategies until the investments begin to bear fruit is key. The creativity Nebius has displayed in managing such risks by investing in other high-growth businesses is therefore worth noting.

Scaling strategy:

  • Nebius has a clear approach to scale. Whilst CoreWeave focuses initially on a small group of enterprise clients, which gives credibility for others to potentially follow, Nebius started with AI-native teams and startups as their early client base.

  • As they build out their GPU clusters, funded by their high number of smaller clients, Nebius will have sufficient compute to be able to serve the larger volume demands of enterprise clients in future.

  • In doing so, they will also gain knowledge of the needs of smaller companies, which will enable them to adapt and potentially improve their offerings for the more lucrative enterprise clients. This measured scaling approach, with creatively prudent cost management, sets Nebius apart from its peers and lowers execution risk.

In Summary

All in all, it is apparent that a number of up-and-coming AI-focused infrastructure teams are building out their offerings at extreme pace, enabling their clients to innovate in AI and capitalizing on the opportunities that lie ahead as AI use cases continue to emerge.

Successful neoclouds appear to seek opportunities before others, invest in their people, and identify methods to fund their growth in a way that is sustainable for their needs until they reach their customer service and profitability goals. All of these characteristics give insight that can be applied to any team delivering innovation.


That’s it for this edition. For more delivery leadership insights, subscribe to the Change Leaders Playbook podcast series on YouTube, Spotify, Apple and Audible.
