In the current Generative AI boom, each hyperscaler is carving out a unique AI focus, and for good reason. As business leaders worldwide struggle to integrate AI into their daily workflows, the search for solutions grows stronger. The world’s top hyperscalers are competing to capture this unmet market share, but it’s not a one-size-fits-all approach. Projections indicate that while only 5% of small and medium-sized businesses (SMBs) and 25% of large enterprises used Generative AI in 2022, those figures are set to increase 2.8x by 2026. Virtually every firm on the planet will need to implement a piece of this technology to keep up with the curve.
Our analysis indicates that the top hyperscalers are taking four distinct approaches. Microsoft benefits from its first-mover advantage and consumer accessibility, yet struggles to alleviate business users’ concerns about data protection. Google trailed just behind, addressing those data privacy concerns with additional encryption, offering consumer solutions while catering to enterprises. AWS, on the other hand, is creating a customizable developer ecosystem – empowering individuals to build their own solutions and, in doing so, deepening their long-term reliance on AWS. Meta, uncharacteristically leveraging hyperscaler partners, then announced its own large language model (LLM), which is open-source and free for enterprises and consumers alike. Each hyperscaler is pursuing a distinct strategy to capture its desired share of the market.
Executives from the world’s leading technology companies were forced to act fast, and their Generative AI solutions are here. The first to the jump… ChatGPT, built by OpenAI and backed by Microsoft. Being the pioneering, readily available Generative AI solution on the market comes with clear benefits, but it has also given competitors time to refine and build rival products. Microsoft Azure and OpenAI did not wait. In 2019, Microsoft’s USD 1 Bn (about USD 3 per person in the US) investment in OpenAI opened the gates for high-scale production; Microsoft quietly added another USD 2 Bn to the cause and is still looking to invest additional billions.
ChatGPT’s free, direct-to-consumer platform has allowed individuals from all around the globe to tap into a chatbot that delivers tailored, conversational answers. For most people outside the tech world, this is their first interaction with the revolutionary technology that is Generative AI. Now supporting over 100 Mn users, the website draws roughly 1.8 Bn visits per month – a number that continues to climb. While these numbers are record-breaking, ChatGPT’s problems are surfacing as competing developers have had time to bolster their solutions.
Seemingly, Microsoft and OpenAI deemed the key to winning this race to be getting there first. By strategically focusing on immediate access for the general consumer, Microsoft granted itself the opportunity to monetize mass user data. At its core, Microsoft, with ChatGPT, is optimally set up for a consumer-first strategy. However, as one of the industry leaders, Microsoft has of course developed extensive supplementary applications focused on enterprise applicability. One of these, Copilot (an AI assistant), is delivered across Dynamics 365, M365, Teams, Viva Sales, Security Copilot, and numerous other Microsoft applications. With its ChatGPT integration, Copilot is just one example of how Microsoft is able to use consumer applications to enhance its enterprise offerings. With over 4,000 enterprise customers already using Azure OpenAI services, Microsoft is demonstrating a strong emphasis not only on the consumer but on enterprise applications as well.
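For context on what those Azure OpenAI services look like from a customer’s side, here is a minimal sketch of a chat call using the 2023-era openai Python SDK (v0.x) against an Azure deployment; the endpoint, key, and deployment name are placeholders of my own, not details from this article.

```python
import openai

# Azure OpenAI configuration (all values below are hypothetical placeholders).
openai.api_type = "azure"
openai.api_base = "https://my-company.openai.azure.com/"  # your Azure resource endpoint
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_AZURE_OPENAI_KEY"

# For Azure, `engine` names your deployment of a model such as gpt-35-turbo.
response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",
    messages=[
        {"role": "system", "content": "You are an assistant for internal sales teams."},
        {"role": "user", "content": "Summarize the key risks in this quarter's pipeline."},
    ],
    temperature=0.2,
)
print(response["choices"][0]["message"]["content"])
```

Because the model sits inside the customer’s own Azure resource, this is the pattern Microsoft pitches to enterprises worried about data leaving their tenant.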
End to end, Microsoft’s go-to-market strategy provides coverage of a significant portion of the Generative AI addressable market. Not only were Microsoft and OpenAI first to the jump, but they have also strongly positioned themselves as one of the “brand names” among Generative AI chatbots and beyond.
A 2023 launch versus a 2022 launch: was it too late for Google?
With ChatGPT taking the Generative AI world by storm, Alphabet, a long-recognized data and AI leader, was next in line to release a product of its own. Google and Alphabet CEO Sundar Pichai announced the introduction of the company’s Generative AI products, Bard and Vertex AI. Google’s decision to focus Vertex specifically on enterprise customers differentiates its approach to the market from the earlier consumer-focused products.
Although there were imperfections in the initial launch, Google’s executive team ultimately decided it was more effective to release technology that hadn’t been perfected than to lag behind in the chatbot market. Google was not alone in this strategy: numerous other hyperscalers and chasing companies are releasing their own AI products as early as possible, to accelerate learning and keep up with the curve. Google’s differentiator, though, was the extra time before launch, which allowed its engineers to build an extensive IP privacy protection system – a different focus from ChatGPT’s. With several established B2C offerings to lean on (Google Search, Photos, Maps, YouTube, etc.), the team has continued to implement its Generative AI solutions in these products. Even though Google Bard is purely a consumer-facing tool and not generally sold to enterprises, the firm’s strategic pairing of Bard with its Vertex applications provides coverage of an enormous portion of the addressable Generative AI market.
With developed B2C solutions, the key for Google to gain its desired market share is a focus on enterprises. With extensive IP protection, the team can now target enterprise customers with reassurance that their information will be protected – a contrast with other AI products. Vertex’s Generative AI suite, containing enterprise search, an app builder, and more, allows Google to capture the B2B market share that other hyperscalers may not hone in on. Today, 70% of Generative AI enterprise start-ups rely on Google Cloud and its AI capabilities, further underscoring enterprises as the key focus.
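As a rough illustration of what building on Vertex’s Generative AI suite can look like, the sketch below calls a PaLM 2 text model through the Vertex AI Python SDK as it existed in 2023; the project ID, region, prompt, and parameter values are my own hypothetical choices.

```python
import vertexai
from vertexai.language_models import TextGenerationModel

# Hypothetical GCP project and region; substitute your own values.
vertexai.init(project="my-gcp-project", location="us-central1")

# text-bison is Vertex AI's PaLM 2 text-generation model.
model = TextGenerationModel.from_pretrained("text-bison@001")

response = model.predict(
    "Draft a one-paragraph answer to an enterprise customer asking how "
    "their prompts and data are protected.",
    max_output_tokens=256,
    temperature=0.2,
)
print(response.text)
```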
So where is Google going from here? While not a unique strategy, the company is tapping into one of its biggest assets: the Google search engine. The team has announced that Bard will be infused into its iconic search tool. Leveraging a 93% share of global search engine usage, Google’s strategic implementation of this bot into its search service and suite of name-brand products will guarantee greater visibility and usage of its technology.
As the service itself states, “Bard is an experiment… Bard will not always get it right… Bard will get better with your feedback.” Unprecedented access to first-party data will provide ample opportunity for refinement of Bard and other Generative AI solutions. Bard and Vertex are on the rise; with the most powerful search engine in the world behind them, they have no ceiling.
With the first two major releases in Generative AI – ChatGPT and Bard – already focused on consumers and enterprises directly, AWS could either follow suit or choose an alternative approach. Microsoft and Google strategically focused on providing solutions directly to both ends of the market. But is this the most effective strategy? AWS doesn’t believe so. Amazon’s strategists instead decided to focus first on building their developer ecosystem from the ground up – fostering creativity among developers. By giving engineers the tools to build out their preferred use cases, the AWS team has created an entire developer-ecosystem market for itself.
At the core, AWS is providing developers with the tools to build, train, and deploy their own models. Pulling in solutions from leading AI start-ups and combining them with in-house Amazon development, programmers can now select from four foundation models (FMs) – Jurassic-2 (AI21 Labs), Claude (Anthropic), Stable Diffusion (Stability AI), and Amazon Titan (Amazon). Billed as “the easiest way to build and scale Generative AI applications with FMs,” the service supplies developers with the core needed to build out personalized Generative AI. Currently, 150+ third-party foundation models can be pulled into AWS solutions – providing ample opportunity for partnerships. AWS is strategically creating LLM frameworks for the roughly 27 Mn developers around the world who are most likely to leverage Generative AI tools.
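As a minimal sketch of what working with these FMs looks like, the snippet below invokes the Amazon Titan text model through the Bedrock runtime API with boto3; the region, prompt, and generation settings are illustrative assumptions, and each FM family expects its own request-body schema.

```python
import json
import boto3

# Bedrock runtime client; availability varies by region (us-east-1 is common).
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body in the Titan text models' schema; Claude, Jurassic-2, and
# Stable Diffusion use the same invoke_model call with their own schemas.
body = json.dumps({
    "inputText": "Explain foundation models to a new developer in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

Swapping the modelId and body schema is all it takes to move between providers, which is the portability pitch behind AWS’s ecosystem play.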
But that’s not all: AWS also announced the general availability of CodeWhisperer. Free for individual developers, this AI companion uses the newly improved Titan FM to boost developer productivity by generating code recommendations in real time. Not only that, but citizen developers can leverage these tools to build improved application capabilities for their businesses – all from a text prompt in natural language. Radically improving speed and accuracy among coders, Amazon executives are confident that CodeWhisperer provides the “most secure way to generate personalized code.”
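CodeWhisperer lives inside the IDE rather than behind a callable API, so the snippet below only illustrates the comment-to-code workflow it supports: the developer writes a natural-language comment, and the assistant proposes an inline completion of roughly this shape. The function is my own hypothetical example, not actual CodeWhisperer output.

```python
# Developer's natural-language prompt, typed as a comment in the IDE:
# "Parse an ISO-8601 date string and return the name of its weekday."

# A completion of the kind the assistant might suggest inline:
from datetime import datetime

def day_of_week(iso_date: str) -> str:
    """Return the weekday name for an ISO-8601 date string."""
    return datetime.fromisoformat(iso_date).strftime("%A")

print(day_of_week("2023-07-18"))  # "Tuesday"
```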
By starting with the ecosystem itself, AWS is taking a different approach to the Generative AI market. Some believe the firm is missing out on a revolutionary consumer and enterprise market-share opportunity. Even so, Amazon has announced no plans to release its own chatbot, focusing instead on selling its infrastructure and platform capabilities, with core and citizen developers leading the way for AWS. The firm has even launched dedicated AI chips, Inferentia and Trainium, so developers can train and serve LLMs more efficiently. From this position, AWS doesn’t need to compete directly with the abundantly popular Generative AI applications. It is creating and capturing its own market – the developers themselves.
Amazon has been investing strategically in the Generative AI space and is clearly leveraging another avenue to capture its market share: the developer ecosystem. AWS executives hold firm that one AI model will not “rule the world” – underscoring the clear difference in strategy from their competitors.
In an unusual move, Meta is partnering with top hyperscalers to provide its open-source large language model. Usually competitors, the tech giants are using these strategic partnerships to provide another powerful solution in the Generative AI space. The key difference is that Meta has made Llama 2 a free, open-source solution for both research and commercial use – available through platforms such as Amazon SageMaker JumpStart, Hugging Face, and other major cloud providers. The model’s code is openly accessible, which aligns with Meta’s philosophy as well –
“Giving businesses, start-ups, entrepreneurs, and researchers access to tools developed at a scale that would be challenging to build themselves, backed by computing power they might not otherwise access, will open up a world of opportunities for them to experiment, innovate in exciting ways, and ultimately benefit from economically and socially.”
By democratizing AI, Meta executives are strategically targeting a wider range of enterprises that can build inside its products for free. Opening the model to the public allows for faster and more efficient feedback, testing, and iteration – all of which are pertinent to creating the most “capable” LLM. Not only that, but Meta’s co-opetition strategy positions it as one of the preferred partners within the core developer community. From this angle, Meta’s open-source LLMs are direct competition and a threat to the other hyperscalers’ closed-source technologies. Today, none of the other hyperscalers provides open-source competition like Meta’s transparent, easy-to-access, and customizable solution.
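To make “free for research and commercial use” concrete, here is a minimal sketch of loading Llama 2 from Hugging Face with the transformers library; it assumes you have accepted Meta’s license for the gated weights, installed accelerate for device placement, and have enough GPU memory for the 7B chat variant.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Gated repository: accepting Meta's Llama 2 license on Hugging Face is required.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Suggest three low-cost ways a small retailer could use an LLM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That the weights run anywhere – a laptop, a rival cloud, an on-premises cluster – is exactly what distinguishes Meta’s play from the closed-source hyperscaler offerings.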
Comparatively, Meta has a less diverse profit stream than its hyperscaler counterparts. However, with continued investment in Generative AI and the metaverse, the future is bright for the diversification of its profit portfolio – a future of both B2B and B2C offerings. While many may see these partnerships as a rare occurrence, top executives at Meta and its partner firms are leveraging each other to create room for themselves in the booming Generative AI landscape.
The Generative AI addressable market goes miles further than chatbots. Key areas include AI ad production, AI Infrastructure-as-a-Service, AI storage and servers, and even AI gaming development, to name a few. Top developers at the world’s leading hyperscalers are targeting their efforts at these markets – the race is on.
Where, though, is the untouched market? The SMB segment. Hyperscalers are focused on capturing general consumers and enterprise clients first, leaving SMBs rather “unclaimed.” Not only does the SMB segment account for ~45% of the United States’ overall economy, but projections also indicate that more than 70% of these businesses will be utilizing Generative AI by 2026, compared to roughly 5% now. Clearly, a huge addressable market. However, there needs to be a balance between the segments: hyperscalers who ditch their current strategy and split resources to focus on SMBs may not come out as the leader in either market. Timing of entry will be critical; too early and they abandon their core strategy, too late and they miss the market altogether. As the Generative AI market matures, firms that stay true to their core vision and push the hardest on selected use cases will come out on top.
But it’s not a one-size-fits-all approach. Generative AI technology has been in the works for years but has only recently broken through into widespread use. Now, we look toward the future. The market opportunity for Generative AI is set to surge to USD 1.3 Tn by 2032. Whether firms are focusing on direct-to-consumer offerings or supporting developer ecosystems, creating open-source or private LLMs, or providing more trust and security rather than taking calculated risks – executives will need to make these crucial decisions to effectively position their firms to capitalize on the Generative AI boom. The one certainty is… they don’t want to miss it.