
A Look at How Cloud Computing Has Changed IT Landscape Over the Years


When people think of cloud computing, the first consideration that often comes to mind is financial: moving workloads from on-premises data centers to the cloud reduces capital expenditures (CapEx) and creates a granular, pay-per-use IT consumption model, although it may increase operating expenditures (OpEx). But behind the pay-as-you-go pricing model, the cloud is merging the latest IT infrastructure trends, software development approaches, and AI capabilities to build better and smarter applications faster. Cloud computing offers opportunities that simply were not available when new software services required the purchase of new server hardware or enterprise software suites. What took six months to deploy on-premises now takes 10 minutes in the cloud. What required signatures from several levels of management to create on-premises can be charged to a credit card in the cloud. But it’s not only a matter of time and convenience. The cloud enables higher velocity for software development, which leads to shorter time to market, and it allows for more experimentation, which often leads to higher software quality.


Here are some cloud capabilities that enable real innovations to solve long-standing problems with on-premises IT infrastructure.


1. On Demand and Fast Compute

Gone are the days when you had to wait in line for months to get a new database on an on-premises physical server. You could reduce the wait time if your company used virtual machines with VMware software, but now you can create a server instance on a public cloud, provision it, and have it running in about 15 minutes – and you can size it to your needs and turn it off when you’re not using it. Being able to bring up a VM with the operating system of your choice is convenient, but you still need to install and license the applications you need. Being able to bring up a VM with the operating system and applications of your choice, from pre-built virtual machine images that are ready to run, is priceless.
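
As a minimal illustration of on-demand compute, the sketch below uses boto3, the AWS SDK for Python, to launch a single instance from a pre-built machine image and shut it down when it is no longer needed. The image ID, instance type, and region are placeholders rather than recommendations, and the other major clouds expose equivalent APIs.

    import boto3

    # Connect to the EC2 API in a chosen region (placeholder region).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch one instance from a pre-built image (AMI ID is a placeholder).
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical pre-built image
        InstanceType="t3.micro",           # size it to your needs
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched", instance_id)

    # Turn it off when you're not using it, so you stop paying for it.
    ec2.terminate_instances(InstanceIds=[instance_id])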


2. Containers

A container is a lightweight executable unit of software, much lighter than a VM. A container packages application code and its dependencies, such as libraries, and shares the host machine’s operating system kernel. Containers can run on Docker Engine or on a Kubernetes service. Running containers on demand has all the advantages of running VMs on demand, with the additional advantages of requiring fewer resources and costing less. Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications; it grew out of Google’s internal “Borg” technology. A K8s cluster consists of a set of worker machines, called nodes, that run containerized applications. Worker nodes host pods, which contain the applications, and a control plane manages the worker nodes and pods. K8s runs on laptops, on-premises servers, and every major public cloud, and it scales from a single node to very large clusters. All major public clouds offer managed K8s services; you can also run K8s on your own development machine.
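
To make the pod and deployment vocabulary concrete, here is a small sketch using the official Kubernetes Python client to create a three-replica Deployment of a stock nginx image. The names and image are illustrative placeholders; a YAML manifest applied with kubectl would express the same thing.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (e.g., a dev machine).
    config.load_kube_config()

    container = client.V1Container(
        name="web",                       # placeholder container name
        image="nginx:1.25",               # stock image used for illustration
        ports=[client.V1ContainerPort(container_port=80)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web-demo"),
        spec=client.V1DeploymentSpec(
            replicas=3,                   # the control plane keeps 3 pods running
            selector=client.V1LabelSelector(match_labels={"app": "web-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web-demo"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Ask the cluster's control plane to schedule the pods onto worker nodes.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)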


3. Serverless

“Serverless” means that a service or piece of code will run on demand for a short time, usually in response to an event, without needing a dedicated VM on which to run. If a service is serverless, then you typically don’t need to worry about the underlying server at all; resources are allocated out of a pool maintained by the cloud infrastructure. Serverless services, available on every major public cloud, feature automatic scaling, built-in high availability, and a pay-for-value billing model. If you want a serverless app without being locked into any specific public cloud, you could use a vendor-neutral serverless framework such as Kubeless.
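
For a sense of how little code a serverless function can be, here is a minimal sketch of an AWS Lambda-style handler in Python. The event shape and handler name are assumptions for illustration; Azure Functions, Google Cloud Functions, and frameworks such as Kubeless use slightly different but equally small entry points. The platform allocates compute from its pool only while the handler runs, which is where the pay-for-value billing comes from.

    import json

    def handler(event, context):
        """Runs on demand in response to an event; no dedicated VM to manage."""
        # Assume the triggering event carries a "name" field (illustrative only).
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }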


4. Hybrid IT & Monitoring Services

Companies with large investments in data centers often want to extend their existing applications and services into the cloud rather than replace them with cloud services. All the major cloud vendors now offer ways to accomplish that, both through specific hybrid services (for example, databases that can span data centers and clouds) and through on-premises servers and edge resources that connect to the public cloud, an arrangement often called a hybrid cloud. All the major clouds also offer IT monitoring services that make it easy to track the health and performance of your cloud services. These monitoring services typically provide a graphical dashboard and can be configured to notify you of exceptions and unusual performance indicators.
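
As one example of such monitoring hooks, the sketch below uses boto3 to publish a custom metric to Amazon CloudWatch; the namespace and metric name are made up for illustration, and other clouds (Azure Monitor, Google Cloud Monitoring) expose comparable APIs. A dashboard or alarm configured on this metric can then surface unusual values.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Publish a custom metric point (namespace and metric name are illustrative).
    cloudwatch.put_metric_data(
        Namespace="HybridApp/Orders",
        MetricData=[{
            "MetricName": "FailedOrders",
            "Value": 3,
            "Unit": "Count",
        }],
    )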


5. Distributed and Edge Computing

Databases aren’t the only services that can benefit from running in a distributed fashion. The issue is latency: if compute resources are far from the data or from the processes under management, it takes too long to send and receive instructions and information. If latency is too high in a feedback loop, the loop can easily go out of control. If latency is too high between the machine learning system and the data, the time it takes to perform training can balloon. To solve this problem, cloud service providers offer connected appliances that extend their services into a customer’s data centers (hybrid cloud) or near a customer’s factory floors (edge computing). The need to bring analysis and machine learning geographically close to machinery and other real-world objects (the Internet of Things, or IoT) has led to specialized devices, such as miniature compute devices with GPUs and sensors, and architectures to support them, such as edge servers, automation platforms, and content delivery networks. Ultimately, these all connect back to the cloud, but the ability to perform local data analysis at the edge can greatly decrease the volume of data sent to the cloud as well as reduce the latency.
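
The point about reducing data volume at the edge can be shown with a plain-Python sketch (no particular edge platform assumed): raw sensor readings are summarized locally, and only the compact summary would be shipped to the cloud.

    from statistics import mean

    def summarize_window(readings, threshold=75.0):
        """Collapse a window of raw sensor readings into a small summary record."""
        return {
            "count": len(readings),
            "mean": mean(readings),
            "max": max(readings),
            "alerts": sum(1 for r in readings if r > threshold),  # local anomaly check
        }

    # One minute of per-second temperature readings (illustrative values).
    window = [70.1, 70.4, 71.0, 76.3, 70.2] * 12        # 60 raw points
    summary = summarize_window(window)

    # Instead of 60 raw data points, only this small record is sent to the cloud.
    print(summary)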


6. Geo-Scale Databases

Public clouds and several database vendors have implemented geo-scale distributed databases with underpinnings such as data fabrics, redundant interconnects, and distributed consensus algorithms that enable them to work efficiently and with 99.999% uptime. Cloud-specific examples include Google Cloud Spanner (relational), Azure Cosmos DB (multi-model), Amazon DynamoDB (key-value and document), and Amazon Aurora (relational). Offerings from other database vendors include CockroachDB (relational), PlanetScale (relational), Fauna (relational/serverless), Neo4j (graph), MongoDB Atlas (document), DataStax Astra (wide column), and Couchbase Cloud (document).
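
To give a flavor of the consensus underpinning (without modeling any specific product), here is a toy sketch of the majority-quorum rule that protocols such as Paxos and Raft rely on: a write counts as committed only once a majority of replicas acknowledge it, which is what lets a geo-distributed database keep serving writes while a minority of replicas is unreachable.

    def is_committed(acks: int, replicas: int) -> bool:
        """A write commits once a strict majority of replicas acknowledge it."""
        return acks >= replicas // 2 + 1

    # With 5 replicas spread across regions, 3 acknowledgements are enough,
    # so the system keeps accepting writes even if 2 replicas are unreachable.
    print(is_committed(acks=3, replicas=5))   # True
    print(is_committed(acks=2, replicas=5))   # False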


7. Accessible AI

AI and machine learning training, especially deep learning, often requires substantial compute resources for hours to weeks. Machine learning prediction, by contrast, needs compute for only seconds per prediction, unless you’re doing batch predictions. Hardware and software hosted in the cloud are often the most convenient way to accomplish both model training and prediction. Deep learning with the large models and large datasets needed for accurate training can take weeks when run on clusters of CPUs. GPUs, TPUs, and FPGAs, on the other hand, can cut training time down significantly, and when such resources are in the cloud, it becomes easy and relatively cheap to use them only when needed. Additionally, many AI tasks can be performed well by pre-trained models, for example language translation, text to speech, and image identification. All the major cloud services offer pre-trained AI services based on robust models. Sometimes pre-trained AI services don’t do exactly what you need. Transfer learning, which trains only a few neural network layers on top of an existing model, can give you a customized service relatively quickly compared to training a model from scratch. Again, all the major cloud service providers offer transfer learning, although they don’t all call it by the same name.
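
As a generic illustration of transfer learning (not tied to any particular cloud’s managed service), the sketch below uses tf.keras to freeze a pre-trained image model and train only a small new classification head on top of it; the class count and input size are placeholder assumptions.

    import tensorflow as tf

    NUM_CLASSES = 5   # placeholder: number of categories in your custom dataset

    # Reuse a network pre-trained on ImageNet, without its original classifier head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet"
    )
    base.trainable = False   # freeze the pre-trained layers

    # Train only a small new head on top of the frozen features.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_dataset, epochs=5)   # train_dataset is assumed to exist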


Several of the cloud innovations described above are enabling further domain-specific innovation. Combined with other technology trends, the cloud could drive macro-level disruptions at a staggering scale.



/Service Ventures Team
