
What is Cloud Computing? Types of Cloud-Based Technologies and Services

1. Brief History of Cloud Computing
2. Why Is It Called Cloud Computing?
3. How Does Cloud Computing Work?
4. Main Cloud Service Models: IaaS, PaaS, SaaS
5. Serverless
6. Benefits of Using the Cloud
7. Pros and Cons of Cloud Computing
8. Why Businesses Have a Hard Time Trusting Cloud-Based Services
9. Virtualization and Security of Cloud Services
10. Types of Cloud Services
11. Server Locations
12. Energy Efficiency of Cloud Solutions

Cloud computing refers to the infrastructure, solutions, software, and resources offered over the internet, the so-called cloud. It is a broad term that encompasses vCPUs, databases, GPUs, and Data Analytics Virtual Machines (DAVM). Cloud computing features three principal software delivery models: IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service).

The use of cloud computing services and cloud resources empowers businesses to scale up efficiently without large upfront investments in bare-metal infrastructure. This may include, for example, setting up virtual desktops for geographically dispersed teams or remote work. In this article, we’ll discuss the various types and models of cloud computing and analyze their return on investment.

Brief History of Cloud Computing

The term “cloud computing” itself is relatively young. First instances of it being searched on Google date back to the end of 2007. The term was gradually supplanting “grid computing” in search results. IBM was one of the first companies to introduce the concept to a worldwide audience, launching its Blue Cloud in early 2008. The project featured a series of “computing offerings” that would enable companies to get access to a distributed, globally accessible network of resources, rather than local or remote servers. Blue Cloud was built upon IBM’s experience with high-performance systems, open standards, and open-source software. To understand how we’ve reached this point in technology, let’s take a look at a few milestones.

The idea of cloud computing was formulated by Joseph Licklider as far back as the 1960s. In those years, he played a significant role in establishing ARPANET (Advanced Research Projects Agency Network). Licklider envisioned a world where everyone would be connected to a global network, receiving data and programs from it. John McCarthy, another prominent computer scientist, proposed the concept of delivering computing power to users as a service. However, these ideas were way ahead of their time, and they had to wait for technological advancements to catch up in the 90s.

Faster internet connections in the 90s didn’t immediately produce a breakthrough, as almost no ISP was prepared for such a revolution. Still, the steadily growing network bandwidth laid the groundwork for the cloud computing advancements that followed.

One of the most notable events of this era was the launch of Salesforce.com in 1999. The company was the first ever to provide its services over the internet, thus prototyping the SaaS model.

The next milestone was the launch of Amazon Web Services in 2002. At first, it offered a limited set of web services, but in 2006 the company announced Amazon Elastic Compute Cloud (EC2), which allowed users to rent virtual servers and run their own applications.

In 2008, Microsoft revealed its own cloud computing plans (it’s funny that the company is now floating the idea of OS-less laptops, with the OS fully moved to the cloud) that manifested themselves in what was then called Windows Azure, a full-fledged cloud services platform.

Why Is It Called Cloud Computing?

A cloud services provider owns a server farm located in a data center. Depending on business needs, corporate clients can rent server cores for micro-projects, like running Telegram bots, or entire racks with numerous virtual machines; one use case would be running a large e-commerce marketplace. Computing resources offered by the provider are virtualized. Virtualization, in a nutshell, allows one physical computer to perform the tasks of multiple computers by distributing its resources across several virtual environments (logical partitions). Using a hypervisor, a computer can be partitioned into virtual machines, each with a dedicated software suite needed for a particular task.

This way, the provider delivers Infrastructure as a Service using its own cloud (for example, via an open-source solution) or a vendor’s cloud. And users can access this infrastructure and projects from any location around the world.

How Does Cloud Computing Work?

The core of cloud computing is the on-demand availability of resources. It uses a pay-as-you-go model, which can be implemented in a variety of ways, from the end-of-month payment for your usage to an upfront cost, where unused resources are carried over to the next month. Either way, clients get complete transparency.
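As a toy illustration of how pay-as-you-go billing might be metered, the sketch below prorates usage by actual hours a resource ran. The hourly rate and usage figures are invented for the example, not any real provider's pricing:

```python
def pay_as_you_go_cost(hourly_rate: float, hours_used: float) -> float:
    """Bill only for the hours a resource actually ran."""
    return round(hourly_rate * hours_used, 2)

# Hypothetical rate: a small VM at $0.05/hour.
# Running it only during a 10-hour workday for 22 business days
# versus leaving it on around the clock for a 30-day month:
on_demand = pay_as_you_go_cost(0.05, 10 * 22)   # 220 hours
always_on = pay_as_you_go_cost(0.05, 24 * 30)   # 720 hours

print(on_demand)  # 11.0
print(always_on)  # 36.0
```

The difference between the two figures is exactly what the pay-as-you-go model lets a client keep: idle hours simply aren't billed.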

Examples of Cloud Computing

  • Cloud Databases. Cloud databases as a service enable clients to swiftly deploy new replicas and masters, ensuring fault tolerance, and facilitating work with APIs and Terraform.
  • Machine Learning. Dedicated ML platforms and specialized virtual machines help minimize training overhead and address issues related to data logging and backups.
  • Managed Kubernetes. Managed Kubernetes is typically the go-to for complex applications with a microservice architecture. This method aids in setting up container auto-healing and quickly deploying clusters, even when GPU support is needed. Additionally, it empowers businesses by allowing them to relegate responsibilities and avoid spending valuable resources on an in-house team.

Main Cloud Service Models: IaaS, PaaS, SaaS

IaaS

Infrastructure as a Service (IaaS) is a delivery model where the provider supplies the needed hardware and assumes responsibility for its proper functioning, connections, and ability to deliver agreed-upon computational power. This model can be useful for startups that may require a lot of computational power to launch an analytics platform when investing in bare metal hardware can be quite risky. Servercore offers bespoke cloud servers across three regions, utilizing partnered data centers in Kenya (Nairobi), Kazakhstan (Almaty), and Uzbekistan (Tashkent). The cloud server can be set up in several minutes, giving clients the ability to easily scale up the infrastructure as new features are rolled out.

This model allows companies to freely install and configure various software on the leased infrastructure, with specific tasks and team capabilities in mind. This solution has several advantages. Let’s outline the top five:

  • Providers know how to quickly deploy new servers, meaning that businesses can more easily push features into prod and complete their projects faster.
  • The IaaS provider acquires local legislation compliance certificates and data protection certificates, thereby relieving clients of these responsibilities. The provider also takes upon itself all processes related to scaling up the computing resources as well as other hardware-related things, such as cross-wiring.
  • This is the best solution for businesses that want to manage security themselves, wishing to leverage their InfoSec expertise, or due to having specific data security needs in mind.
  • Hardware is chosen and replaced by the provider in case of malfunctions.
  • The model incurs operational expenses, which are easier to manage.

PaaS

Platform as a Service (PaaS) is a model where the provider offers a ready-made or partially configured application platform.

These solutions feature both the virtual infrastructure (processing units, RAM, storage) and specific software installed on the virtual server.

PaaS solutions are frequently used by engineers and developers who wish to streamline their workflow and free up in-house resources for business objectives. At the initial stages, PaaS directly impacts release speed, making this model suitable for both large services and startups.

The approach is perfectly suitable for developing, implementing, migrating, and deploying apps. One use case is using object storage to host unstructured data to train ML models.

SaaS

Software as a Service provides a ready-to-use solution that is suited to specific business needs. For example, retailers likely have no need for a UX research platform built from scratch: the market has plenty of offerings in this segment.

The majority of services fall under SaaS, such as NoCode and LowCode website builders, task trackers, or poll hosting services. Another unique characteristic of SaaS is a subscription-based payment model, rather than pay-as-you-go.

The primary advantage of SaaS is that ready-made solutions are more cost-effective, eliminating the need for investments in proprietary software. This is particularly beneficial for companies without in-house software engineers.

While SaaS may seem like a silver bullet, the model does come with shortcomings, though not cost-related ones. To drive innovation, large corporations often need to tackle very specific challenges, and SaaS may not offer the required flexibility, even if the solution allows leaving feedback or requesting new features. Waiting for a single feature to arrive is usually cumbersome, so companies tend to develop their own solutions in these cases.

Here’s how responsibilities are typically divided in each of the three main models:

  • IaaS: the provider maintains the physical servers, storage, network, and virtualization layer; the client manages the operating system, middleware, applications, and data.
  • PaaS: the provider additionally manages the operating system and runtime; the client manages only their applications and data.
  • SaaS: the provider manages the full stack; the client manages only their data and user access.

However, there’s a fourth model that, while not as popular yet, also comes with its own advantages.

Serverless

Serverless is a cloud computing paradigm that allows developers to create and launch applications without the need to manage server infrastructure. Rather than concern themselves with server configuration and scaling up the infrastructure, the developers can concentrate on coding and developing application features.

Serverless is fundamentally based on the Function as a Service (FaaS) model. In this context, functions are snippets of code that carry out specific tasks.

Developers have the ability to create and upload functions to a cloud platform, which then automatically oversees their execution. When a function is called, the platform automatically adjusts its scale depending on the current load. Serverless also enables the use of other cloud-based services, such as databases, file storage, etc., without the need to directly manage the infrastructure.

Why Choose Serverless

  1. Developers can concentrate on developing application features instead of dealing with server configuration and management. This allows for faster development turnaround and improved team efficiency.
  2. Serverless platforms offer automatic scaling based on the load. This facilitates optimal resource use and guarantees the availability of the app.
  3. It uses a pay-as-you-go system, where clients pay for the time that the functions are running and the resources used, making this model more economical for many projects.
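The FaaS model described above boils down to a stateless handler plus a platform that decides how many handler instances to run. The sketch below is a generic illustration of that split, not any specific provider's API; the event shape and function names are invented:

```python
import json

def handler(event: dict) -> dict:
    """A stateless function: receives an event, returns a response.
    No server setup, scaling, or lifecycle code lives here."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"Hello, {name}!"})}

def simulate_platform(events: list[dict]) -> list[dict]:
    """Stand-in for the platform: a real one would fan events out
    across workers that it scales up and down with the load."""
    return [handler(e) for e in events]

responses = simulate_platform([{"name": "Ada"}, {}])
print(responses[0]["body"])  # {"greeting": "Hello, Ada!"}
```

Because the handler holds no state between calls, the platform is free to run one copy or a thousand, which is what makes the automatic scaling in point 2 possible.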

Benefits of Using the Cloud

Flexibility and Scalability

Many businesses are highly seasonal: infrastructure load increases during peak periods, requiring additional computational power, while during the off-season the hardware sits idle, especially if you are using on-premises dedicated servers.

Cloud resources can not only be easily scaled up to handle peak loads but also quickly scaled down when demand subsides.

Moreover, this flexibility allows for testing various hypotheses without significant capital investments. For instance, instead of purchasing your own GPUs for training ML models, you could rent them to see how these “homebrew” AI experiments align with the desired business logic.

Higher ROI and Lower IT Costs

The return on investment of cloud technologies is complex to calculate, as it involves a multitude of factors. You can’t accurately assert that a private installation on dedicated servers becomes more cost-effective than the cloud at around the 3-to-5-year mark: ROI varies from project to project, depending on the technology stack, infrastructure maintenance teams, the specific market, and how detailed the roadmap is. Nevertheless, cloud-based solutions are preferable for startups due to the lower barrier of entry. Using a provider’s infrastructure cuts down on capital investments (CAPEX) but increases operational expenses (OPEX).

Cloud solutions also enable companies to save on hiring specialists, as some tasks can be delegated to the service provider who is responsible for executing them under the SLA.

Economic Efficiency

Cloud servers have a more accessible entry point due to the lack of upfront costs, empowering companies to try out several providers and find the optimal balance of price, quality, and features. For instance, some providers offer daily payment plans or allow for the partitioning of vCPU cores. These solutions are perfect for hosting projects that do not require much in terms of resources or fault tolerance, such as small websites or Telegram bots. Purchasing a dedicated server for this purpose would not be cost-effective, whereas setting up cloud hosting takes just a couple of minutes.
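The trade-off between upfront and recurring costs can be reduced to a simple break-even calculation. The sketch below uses invented prices purely to show the shape of the comparison; real numbers vary widely by market and configuration:

```python
def months_to_break_even(dedicated_upfront: float, dedicated_monthly: float,
                         cloud_monthly: float) -> float:
    """Months after which a dedicated server's total cost drops below the
    cloud's. Returns infinity if the cloud is never more expensive."""
    if cloud_monthly <= dedicated_monthly:
        return float("inf")
    return dedicated_upfront / (cloud_monthly - dedicated_monthly)

# Invented numbers: a $3,000 server with $50/month colocation
# versus an equivalent cloud server at $150/month.
print(months_to_break_even(3000, 50, 150))  # 30.0
```

Under these made-up figures the dedicated option only starts paying off after 30 months, which is why low-traffic projects like the small websites and bots above rarely justify the purchase.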

Leasing computational power also helps avoid infrastructure maintenance costs, since it’s up to the provider to ensure uninterrupted power supply and internet connection, proper ventilation, fire extinguishing systems, and system monitoring.

Quick Deployment

Since providing IT services is the core business of cloud providers, their ability to deploy servers and deliver the required services will always be superior to that of companies that do not focus on these things.

This also applies to implementing new solutions that better tackle business tasks. For example, if a company frequently works with time series data but primarily uses PostgreSQL as its DBMS, TimescaleDB can be used as a platform-based solution with no need for additional database solutions.

Pros and Cons of Cloud Computing

Large organizations prefer CAPEX, since owning the actual infrastructure contributes to the company’s market capitalization, unlike operational expenses on cloud computing.

Startups may lack the necessary resources to set up an on-premises private cloud, making it more beneficial to use leased resources.

Difference between Dedicated Servers and the Cloud

We have partially discussed this when talking about scalability, but the main difference between the two lies elsewhere. Dedicated and cloud servers have different approaches to resources. Dedicated servers, much like the private cloud, provide all their resources to one company only. In contrast, public clouds have different clients that occupy adjacent “neighborhoods” of the cloud.

When it comes to on-premises implementation, dedicated servers can be configured and customized for specific tasks.

Perks for the Business and Integration with Business Processes

Cloud computing is easily scalable, making it applicable in a wide range of areas. For instance, if a business needs some ML tools, preliminary training can be done on in-house hardware. Large-scale hardware isn’t needed at this step when it’s necessary to just validate hypotheses and determine the right direction to proceed. If the fundamental hypothesis proves correct, the next step would be to either procure additional equipment or lease computational power from a provider.

Another facet of business process integration is the ability to reserve resources. This is useful, for instance, for highly seasonal businesses, where peak periods cause a surge in demand; reserving capacity in advance ensures that the company’s services remain functional.

Minimal Migration Costs

Given that cloud servers are simpler to set up than dedicated ones, migrating with the help of a global provider will be a cheaper, faster, and more seamless experience. In this context, migration refers not only to deploying servers but also to setting up connections. Handing things over to providers can also help cut down on server maintenance costs and the costs of maintaining proper operating conditions.

Why Businesses Have a Hard Time Trusting Cloud-Based Services

InfoSec is much more straightforward when you have server racks down in your basement: the data is there and not going anywhere. Questions start to arise when businesses decide to transition to the cloud.

For some companies, moving to the cloud means losing direct control not only over a portion of their infrastructure but also over their data.

This is often caused by a non-transparent division of responsibilities between the client and the provider, especially in such areas as information security and fault tolerance.

Virtualization and Security of Cloud Services

For the longest time, cloud services were deemed less secure than other models. This was particularly true of the public cloud, as it was believed that customers were improperly isolated from each other and this fact could be exploited by hackers. As both security tools and InfoSec skills have improved (e.g., with pen testing becoming more popular), the security of both dedicated and cloud servers has become somewhat the same.

Infrastructure security measures involve monitoring of external and internal ports, firewalls, and IAM systems, complemented by regular tests and personnel training.

There are some providers that offer basic anti-DDoS protection for free, thanks to advanced traffic analytics tools.

Data Encryption

Data encryption is the process of transforming readable data into a format that is only decipherable for authorized parties via a specific algorithm and an encryption key.

Symmetric encryption is one of the most common methods for data encryption. It uses the same key to both encrypt and decrypt data. This method allows for quick and effective encryption but necessitates a secure exchange of keys between the sender and receiver.
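The "same key encrypts and decrypts" property can be shown with a deliberately toy cipher. The XOR scheme below is for illustration only and offers no real security; production systems use vetted algorithms such as AES, typically via an audited library rather than hand-rolled code:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying it twice with the same key restores the original.
    NOT secure; shown only to illustrate the symmetric-key property."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_key = b"shared-key"   # both parties must hold this securely
plaintext = b"invoice data"

ciphertext = xor_cipher(plaintext, secret_key)
assert ciphertext != plaintext                        # data is scrambled
assert xor_cipher(ciphertext, secret_key) == plaintext  # same key decrypts
```

The two assertions are the whole point: one shared key performs both operations, which is fast but forces the key-exchange problem the paragraph above mentions.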

Asymmetric encryption is another method, and it uses a pair of keys: a public one and a private one. The public key is used for encryption, while the private key is used for decryption. This method offers better security as the private key is stored on the recipient’s side.

Data encryption can be implemented at various stages and levels when using cloud technologies. At the application level, data can be encrypted even before it’s sent to the cloud and then decrypted on the recipient’s end. At the storage level, data can be encrypted before being saved to the cloud, thus ensuring its safety in the case of unauthorized access attempts.

Additionally, there are several data encryption protocols and algorithms such as AES (Advanced Encryption Standard) or RSA (Rivest-Shamir-Adleman).

IAM and Access Management

IAM establishes a role system with varying access rights and permissions. This solves several security concerns and establishes a user hierarchy. IAM is particularly useful when dealing with systems that are accessed by engineers and software developers. Say, for example, you need to allow your analytics team to see some data but don’t need them to access the whole project. Thanks to IAM, you can assign read-only permissions to their team and don’t need to worry about any files or configurations being changed, inadvertently or not.

The key to access role management is to give just enough access to do the work tasks, nothing more. This helps safeguard project data and prevent data leaks.
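The least-privilege rule and the analyst example above can be sketched as a small role-to-permission mapping. The roles and permission names here are invented for illustration; real IAM systems express the same idea with far richer policies:

```python
from enum import Flag, auto

class Permission(Flag):
    READ = auto()
    WRITE = auto()
    ADMIN = auto()

# Hypothetical roles following least privilege:
ROLES = {
    "analyst": Permission.READ,                       # read-only, as in the example
    "engineer": Permission.READ | Permission.WRITE,
    "owner": Permission.READ | Permission.WRITE | Permission.ADMIN,
}

def is_allowed(role: str, needed: Permission) -> bool:
    """Grant access only if the role carries every required permission.
    Unknown roles get no permissions at all (deny by default)."""
    return needed in ROLES.get(role, Permission(0))

print(is_allowed("analyst", Permission.READ))   # True
print(is_allowed("analyst", Permission.WRITE))  # False
```

Denying by default for unknown roles mirrors the "just enough access, nothing more" principle: anything not explicitly granted is refused.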

Monitoring

Cloud infrastructure monitoring enables tracking the workload and performance of various cloud components, such as virtual machines, network resources, databases, and more. It provides system status updates, enabling administrators to quickly detect and resolve potential issues or vulnerabilities.

The primary goal of infrastructure monitoring is to maintain stable operations and prevent possible malfunctions or overloads. Monitoring allows tracking resource utilization, memory usage, and data transfer speeds, helping to promptly react to changes and optimize system operation.

InfoSec also benefits from monitoring thanks to early detection and prevention of unauthorized access and various attacks on the servers. Monitoring allows for tracking user activity, analyzing event logs, and identifying unusual behavior.

Key principles of cloud infrastructure monitoring include:

1. Data Gathering. Monitoring is conducted by gathering information on various system performance metrics such as CPU load, memory usage, network traffic, and so on.

2. Data Analysis. The gathered data is analyzed to detect anomalies, issues, or potential vulnerabilities. This might be an automated procedure using specific tools or a manual analysis carried out by an engineer.

3. Notifications. Upon detecting issues or anomalies, monitoring tools can send notifications to administrators or other people. This facilitates a swift response to issues and the rollout of fixes.

4. Visualization. Monitoring can help visualize data in graphs, charts, or even control panels. This enables operators to effortlessly evaluate the current system status and make informed decisions.

5. Automation. Numerous monitoring procedures can be automated using specific tools or management systems. This saves time and resources and enhances monitoring effectiveness.
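The first three principles, gathering, analysis, and notification, chain together naturally. The sketch below wires them up with a fixed metric series and an invented CPU threshold; in a real deployment, gathering would poll agents or an API and notification would page an operator:

```python
CPU_ALERT_THRESHOLD = 90.0  # percent; an invented cutoff for the example

def gather(samples: list[float]) -> list[float]:
    """Step 1, data gathering: here a fixed series stands in for polling."""
    return samples

def analyze(samples: list[float], threshold: float = CPU_ALERT_THRESHOLD) -> list[float]:
    """Step 2, analysis: flag samples exceeding the threshold."""
    return [s for s in samples if s > threshold]

def notify(anomalies: list[float]) -> list[str]:
    """Step 3, notification: format alerts (a real system would page someone)."""
    return [f"ALERT: CPU at {a}%" for a in anomalies]

cpu_load = gather([42.0, 55.5, 97.2, 61.0, 93.4])
alerts = notify(analyze(cpu_load))
print(alerts)  # ['ALERT: CPU at 97.2%', 'ALERT: CPU at 93.4%']
```

Visualization and automation (principles 4 and 5) would sit on top of exactly this pipeline, charting the gathered samples and scheduling the loop to run continuously.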

Types of Cloud Services

Cloud Servers

This refers to a single server or a group of virtualized servers that offer their computational resources over the network. These can be utilized for data storage, ML computations, managing remote staff, or infrastructure monitoring.

Cloud servers can be quickly reconfigured and scaled up or down, depending on the situation. This is their primary distinction from dedicated servers.

There are 4 types of cloud infrastructure implementation:

  1. Public Cloud. Public cloud solutions are always offered by a service provider. In this model, every client has the ability to create virtual machines with no limitations but lacks physical access to the servers themselves. Components can be hot-swapped out at any given moment. This comes in handy when the workload fluctuates: RAM sticks or vCPUs can be easily added or removed. The provider sets up all of the infrastructure, up to the guest operating systems, which is why higher degrees of customization are impossible.
  2. Private Cloud. There are two ways to implement a private cloud: using a provider’s infrastructure or on-premises, with in-house hardware. Its key differentiating factor is that all computing resources belong to a single client. This solution is best suited for companies that need to install their own OS and software/hardware, such as dedicated firewalls.
  3. Hybrid Cloud. This type links a part of the client’s infrastructure (on-premises) to the infrastructure of the service provider. Companies receive several benefits (like security certificates, attestation, etc.) and additional resources from providers, while still making use of their own capabilities.
  4. Multicloud. This type presupposes a company using multiple cloud services from various providers. It’s commonly used for building fault-tolerant services or for optimizing expenses. For instance, the core project might be hosted by one provider, while the data storage for training ML models is hosted by another — this solution can be more cost-effective even despite the more complex integration.

How public and private cloud solutions compare:

  • Public cloud: resources are shared among many clients, costs are lower, and scaling is nearly instant, but customization below the guest OS is limited.
  • Private cloud: all computing resources belong to a single client, allowing custom operating systems, hardware, and security setups, at the price of higher costs and slower scaling.

Object Storage

This service is intended for storing unstructured and semi-structured data (such as media files, logs, or JSON documents). The storage is highly scalable and can hold billions of objects, which is very important for training ML models. It also allows incorporating storage into the project’s infrastructure. However, you should check whether the provider offers the must-have modern services, such as S3 replication and support for the FTP/FTPS and SFTP protocols.
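What distinguishes object storage from a filesystem is its flat addressing: every object lives in a bucket under a string key, and "folders" are merely key prefixes. The toy in-memory model below illustrates that interface; it is not a real client for S3 or any other service, which would add metadata, versioning, and replication:

```python
class ObjectStore:
    """Toy in-memory model of an object store: buckets of key -> bytes."""

    def __init__(self) -> None:
        self._buckets: dict[str, dict[str, bytes]] = {}

    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

    def list_objects(self, bucket: str, prefix: str = "") -> list[str]:
        # Keys are flat strings; "folders" are just shared key prefixes.
        return sorted(k for k in self._buckets.get(bucket, {}) if k.startswith(prefix))

store = ObjectStore()
store.put_object("ml-data", "images/cat-001.jpg", b"...")
store.put_object("ml-data", "images/cat-002.jpg", b"...")
print(store.list_objects("ml-data", prefix="images/"))
# ['images/cat-001.jpg', 'images/cat-002.jpg']
```

Because keys are independent of each other, this layout scales horizontally to the billions of objects mentioned above, something hierarchical filesystems struggle with.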

Backups and BaaS

There’s a common misconception that companies always get infrastructure backups along with hosting. Backup creation and Backup as a Service (BaaS) are two distinct services, where BaaS is a dedicated cloud service that helps structure the workflow and plan resource usage. Backups need additional storage space, physical room for backup servers, specialized software, and administrators’ work hours. The more data there is to back up, the higher the expenses. That’s why companies frequently opt for a cloud-based “turnkey” solution when it comes to backups.

BaaS allows clients to schedule backups and offers a mix of data recovery techniques that can help maintain infrastructure with minimal downtime.
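Scheduling backups in practice means deciding which snapshots to keep. The sketch below implements one common-style retention policy with invented parameters (keep the last seven daily backups plus a Monday backup for each of the previous four weeks); real BaaS offerings let clients tune such rules per workload:

```python
from datetime import date, timedelta

def retention_plan(today: date, keep_daily: int = 7, keep_weekly: int = 4) -> list[date]:
    """Hypothetical retention policy: the last `keep_daily` daily backups
    plus one weekly (Monday) backup for each of the previous `keep_weekly` weeks."""
    daily = [today - timedelta(days=i) for i in range(keep_daily)]
    last_monday = today - timedelta(days=today.weekday())
    weekly = [last_monday - timedelta(weeks=w) for w in range(1, keep_weekly + 1)]
    return sorted(set(daily + weekly))

kept = retention_plan(date(2024, 5, 15))  # a Wednesday
print(len(kept))  # 11 snapshots retained
```

Everything outside the returned set can be deleted, which is exactly how a policy like this caps storage costs while still allowing recovery points weeks back.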

Cloud Databases

Besides the well-known solutions such as MySQL and PostgreSQL, providers typically offer additional options for data management. The provider is responsible for choosing the best hardware and setting it up. The same can be said of cluster deployment and workflow for recovering master databases. Some providers choose to create a new master from a replica selected by the quorum, while others prefer to get the master back up, as it may take less time than transferring all the data.
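The "new master from a replica" strategy can be sketched as picking the replica with the most replicated data. This is a simplified illustration with invented replica names and log positions; real quorum protocols (as in Patroni or MySQL group replication) also require a majority of nodes to agree before promoting:

```python
def elect_new_master(replicas: dict[str, int]) -> str:
    """Pick the replica with the highest replication log position,
    i.e. the one that lost the least data when the master failed.
    A real quorum protocol would additionally require majority agreement."""
    if not replicas:
        raise RuntimeError("no replicas available for promotion")
    return max(replicas, key=replicas.get)

# Invented log positions observed after the master fails:
positions = {"replica-a": 1042, "replica-b": 1057, "replica-c": 998}
print(elect_new_master(positions))  # replica-b
```

Promoting the most up-to-date replica minimizes data loss, which is why some providers prefer it over waiting for the old master to come back.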
