The past three years have been very eventful and turbulent for the world, including the tech industry. We have seen a global pandemic, inflation, geopolitical unrest, climate change catastrophes, Mars exploration, chaotic stock markets, mass layoffs, real estate booms, and the futuristic advent of the metaverse. Amidst this tumultuous socio-economic and political environment, the seemingly perpetual heyday of the tech hyperscalers is experiencing a slowdown.
The hyperscalers – Apple, Microsoft, Google, Amazon, and Meta – have collectively lost USD 2.2 Tn in stock market value this year alone, and layoffs and hiring slowdowns have become frequent across organizations of all kinds.
These macroenvironmental and economic factors have dampened the overall mood of the tech industry, driving a shift towards more practical and empathetic strategies. That shift was reflected in Google’s developer conference, I/O ’22, which returned to the Shoreline Amphitheatre in Mountain View after a three-year hiatus with a string of announcements and puzzles (quite literally)!
A lot was announced around the Pixel product portfolio: attendees got a sneak peek of the Pixel 7 and Pixel 7 Pro, the Pixel Watch (the first smartwatch fully built by Google), the Pixel 6a, and the new premium wireless earbuds, the Pixel Buds Pro. What really stood out at a holistic level, however, was Google’s overarching effort to advance its understanding of data and the state of computing, which also aligns with its mission of making information more accessible and useful.
Alphabet CEO Sundar Pichai and his team delivered all the announcements in a very direct, empathetic, and functional manner this time, unlike past I/Os, which were generally more theatrical and gave the impression of a fantasy land for tech enthusiasts.
Sundar demonstrated how Google is leveraging Artificial Intelligence (AI) to improve its products, making them more helpful and more accessible for all types of users. In 2021, Google’s parent company Alphabet spent USD 31.56 Bn on research and development across its many properties, with most of the investment going into search optimization, Machine Learning (ML), image recognition, and natural language understanding. Some of these investments have culminated in very interesting use cases that were showcased during I/O. For example, Google demonstrated a Google Docs feature in which the company’s AI algorithms automatically summarize a long document into a single paragraph.
Prabhakar Raghavan, Senior Vice President at Google, demonstrated a feature called “multisearch,” in which a user takes a picture of a leaky faucet and orders the required part to fix it. In another example, Google showed how one can find a picture of a specific dish, like Korean stir-fried noodles, and then search for nearby restaurants that serve it. These use cases show how Google leverages ML and image recognition to solve mundane, yet important, issues users face in everyday life.
Google’s serverless container platform Cloud Run also garnered a lot of attention, given its unique value proposition. We evaluated Cloud Run against its competitive landscape, which revealed some interesting insights.
As an extension to the service, Google launched Cloud Run jobs for developing and deploying containerized apps using any language, operating system libraries, or even binaries. Cloud Run itself was launched in beta in 2019, adding to Google’s then rapidly growing serverless compute stack. As demand for serverless climbed, services like Cloud Run jobs are an attempt to catch up to rivals Azure and Amazon Web Services.
Cloud Run brings together the portability of containers and the scalability of serverless computing. It allows users to write code in any language they choose, using any binary, without having to manage the underlying infrastructure.
As organizations aim to become digital innovation hubs, there will be increased demand for products and services equipped with digital-native speed and scale. This can be provided by containerizing applications. As per industry predictions, the popularity of containerized apps is expected to grow significantly by the end of 2022.
Containerized applications are not new territory for the hyperscalers dominating the market. For example, Amazon’s AWS Fargate and Microsoft’s Azure Container Instances (ACI) are close rivals to Google’s Cloud Run in terms of segmentation. However, both have their own limitations, which makes Cloud Run a very lucrative offering in the world of containerized serverless apps.
One major advantage of Cloud Run is its intuitive developer experience and ease of use. Compare that with Amazon Fargate, where many users have complained that the amount of configuration required is both ambiguous and tedious. While it is good to have configuration options, there should also be a well-structured path to follow.
Another big advantage of Cloud Run over AWS Lambda is concurrency: each Lambda instance can handle only one request at a time, so the time an instance spends waiting on network requests (e.g., a DB query) is money down the drain.
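A Cloud Run instance, by contrast, can serve many requests at once, so time spent waiting on I/O is amortized across them. The effect on billed instance time can be sketched with a minimal back-of-the-envelope model (the request counts, durations, and the helper function below are illustrative assumptions, not actual pricing):

```python
import math

def billed_instance_seconds(requests, duration_s, concurrency):
    """Estimate total billed instance-seconds for a burst of simultaneous
    requests, each taking `duration_s` (mostly I/O wait).

    concurrency=1 models a Lambda-style one-request-per-instance platform;
    higher values model Cloud Run-style instances that share CPU and
    memory across parallel requests.
    """
    # Instances needed to absorb the burst at the given concurrency:
    instances = math.ceil(requests / concurrency)
    # Each instance is billed for the time it spends serving the burst:
    return instances * duration_s

# 80 simultaneous requests, each spending 2 s waiting on a DB query:
lambda_style = billed_instance_seconds(80, 2.0, concurrency=1)      # 160.0 s billed
cloud_run_style = billed_instance_seconds(80, 2.0, concurrency=80)  # 2.0 s billed
```

The model ignores cold starts and per-request fees, but it captures the core point: when requests are I/O-bound, sharing an instance across concurrent requests can cut billed compute time dramatically.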
The promise of a serverless container platform is to deliver a developer experience like that of a PaaS; AWS Fargate on EKS, by contrast, requires DevOps to do quite a bit of heavy lifting before developers can deploy.
Azure Container Instances, on the other hand, has a fundamental flaw that is hard to ignore. Unlike Google Cloud Run and Fargate, ACI does not use a scheduler: there is no scaling of any kind, and one can only run single-replica containers isolated from each other. This means ACI cannot be used in production for anything that requires scaling.
Managing servers means provisioning and configuring them, as well as worrying about scaling when traffic fluctuates, so there is always a risk of overprovisioning resources and paying more than required. Additionally, traditional serverless solutions are known to limit the array of available languages and to require many code changes. Google’s Cloud Run eliminates these drawbacks and offers several benefits:
1. Elasticity with Automatic Scaling: Depending on how light or heavy the traffic is, Cloud Run can automatically scale the application up or down, which is ideal for elastic workloads. This eliminates the need to predict utilization and pre-allocate nodes.
2. Pay-per-Use Model: A strong complement to Cloud Run’s elasticity is its pay-per-use billing (billed to the nearest 100 milliseconds, and only for the execution of services). Cloud Run pricing is entirely consumption-based: after users exhaust the free tier, they pay for four components – CPU, memory, requests, and network egress.
Unlike Cloud Functions, where users are charged for each request independently, in Cloud Run multiple requests can share the allocated CPU and memory through parallel execution, so concurrency helps optimize spending.
3. Ease of Use: Cloud Run can execute workloads in three easy steps –
• Write your application in the language of your choice,
• package the app as a container, and
• deploy it where you want.
4. Portability: Since Cloud Run accepts standard container images and is built on Knative, an open standard, applications can easily be moved to any Kubernetes cluster on GCP, on-premises, or on any other cloud, seamlessly enabling portability of workloads across platforms.
5. Environmental Impact: With increasing environmental consciousness among customers, there is immense potential for a pay-per-use model, as it not only provides economic benefits but also aligns with the sustainability agenda of many organizations. At the Cloud Run launch event, Google customer Veolia talked about the potential of this feature. This could be a major winning proposition for Cloud Run, as the focus on sustainable cloud solutions is becoming paramount among today’s enterprises, especially those in Europe.
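The three-step workflow above (write, containerize, deploy) can be made concrete with a minimal sketch. Below is a stdlib-only Python service that follows Cloud Run’s convention of listening on the port supplied in the `PORT` environment variable; the handler name and greeting text are illustrative, not from any Google sample:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Respond to any GET with a plain-text greeting."""

    def do_GET(self):
        body = b"Hello from Cloud Run"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Cloud Run tells the container which port to listen on via the
    # PORT environment variable (8080 is the conventional default).
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Calling `serve()` runs the app locally; packaged with a short Dockerfile, it could then be deployed with `gcloud run deploy`. A production service would typically swap `http.server` for a proper web framework and WSGI server, but the contract with the platform stays the same.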
Cloud Run has the potential to lead the containerized app segment, given how much value it drives vis-à-vis its competitors: the intuitive developer experience, concurrency, and ease of scaling.
Cloud Run is appropriate for developers who do not wish to manage infrastructure and want to focus on writing code and bundling binaries within a container for the task at hand. If more complex configuration is required, GKE or another container environment that offers additional features may be more appropriate.
Cloud Run also deals very smoothly with the two major issues associated with the FaaS (Function-as-a-Service) serverless model: vendor lock-in and cold starts. Its containers can move anywhere, so lock-in is not a concern, and its concurrency model handles the cold start problem to a fair degree.
While the cloud wars prevail among the big three hyperscalers, it will be interesting to see how this segment pans out for Google as demand for serverless containers goes up. With Google Cloud entering the fray alongside AWS and Azure, it is now a waiting game to see whether Cloud Run will help Google become the king of serverless compute.