Term | Definition | Category |
---|---|---|
Cloud Computing | The delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. Users typically pay only for the cloud services they use, helping lower operating costs and run infrastructure more efficiently. | Core |
Public Cloud | Computing services offered by third-party providers over the public internet, making them available to anyone who wants to use or purchase them. Public cloud resources are shared across multiple organizations (multi-tenant environment). Major public cloud providers include AWS, Microsoft Azure, and Google Cloud Platform. | Deployment Model |
Private Cloud | Cloud computing resources used exclusively by a single business or organization. A private cloud can be physically located in the company’s on-site data center or hosted by a third-party service provider. All services and infrastructure are maintained on a private network, providing greater control, security, and customization. | Deployment Model |
Hybrid Cloud | A computing environment that combines a public cloud and a private cloud by allowing data and applications to be shared between them. This gives businesses greater flexibility, more deployment options, and helps optimize existing infrastructure, security, and compliance while allowing workloads to move between environments as needs and costs change. | Deployment Model |
Multi-Cloud | The use of services from multiple public cloud providers simultaneously (such as AWS, Azure, and GCP) for different workloads or applications. This approach helps organizations avoid vendor lock-in, optimize costs, increase redundancy, and leverage best-in-class services from different providers. | Deployment Model |
Community Cloud | A collaborative cloud environment shared among several organizations with common concerns such as security, compliance, or jurisdiction. Community clouds are less common but serve organizations with similar requirements, such as government agencies, healthcare providers, or financial institutions. | Deployment Model |
Infrastructure as a Service (IaaS) | A cloud computing service model that delivers fundamental compute, network, and storage resources on-demand, over the internet, on a pay-as-you-go basis. IaaS gives users direct control over operating systems, storage, and deployed applications while providing the highest level of flexibility and management control. Examples include AWS EC2, Azure Virtual Machines, and Google Compute Engine. | Service Model |
Platform as a Service (PaaS) | A cloud computing service model that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app. PaaS offerings include development tools, database management systems, and business analytics. Examples include AWS Elastic Beanstalk, Azure App Service, Google App Engine, and Heroku. | Service Model |
Software as a Service (SaaS) | A cloud computing service model that delivers software applications over the internet, on a subscription basis. With SaaS, cloud providers host and manage the software application and underlying infrastructure, handling maintenance like software upgrades and security patching. Users connect to the application over the internet, usually with a web browser. Examples include Microsoft 365, Google Workspace, Salesforce, and Dropbox. | Service Model |
Function as a Service (FaaS) | A cloud computing service model that enables developers to build, run, and manage application functionalities without the complexity of maintaining infrastructure. FaaS is an implementation of serverless architecture where applications are broken down into individual functions that run in response to events. Examples include AWS Lambda, Azure Functions, and Google Cloud Functions. A minimal handler sketch appears after this table. | Service Model |
Database as a Service (DBaaS) | A cloud computing service model that provides users with access to a database without the need to set up hardware, install software, or configure the database. The cloud provider manages the provisioning, scaling, backup, and maintenance of the database. Examples include Amazon RDS, Azure SQL Database, and Google Cloud SQL. | Service Model |
Backend as a Service (BaaS) | A cloud computing service model that provides developers with ways to connect their web and mobile applications to cloud services via APIs and SDKs. BaaS offerings typically include features like user authentication, database management, remote updates, push notifications, and cloud storage. Examples include Firebase, AWS Amplify, and Azure Mobile Apps. | Service Model |
Elasticity | The ability of a cloud system to automatically increase or decrease the resources allocated to a workload based on current demand. Elasticity allows applications to scale up or down dynamically, ensuring optimal resource utilization and cost efficiency while maintaining performance under varying workload conditions. | Core Concept |
Scalability | The ability of a system to handle growing amounts of work by adding resources to the system. In cloud computing, scalability can be achieved horizontally (adding more machines) or vertically (adding more power to existing machines), enabling applications and services to accommodate increased demand without performance degradation. | Core Concept |
Virtualization | The process of creating a software-based (virtual) representation of physical resources like servers, storage, networks, and operating systems. Virtualization allows multiple virtual machines to run on a single physical machine, dividing its physical resources into multiple logical or virtual resources; it is essential to efficient cloud infrastructure. | Core Concept |
Containerization | A lightweight alternative to full virtualization that involves encapsulating an application and its dependencies (libraries, binaries, configuration files) into a self-contained unit called a container. Containers are portable across different computing environments, start quickly, and use resources efficiently. Docker is a popular containerization technology, and Kubernetes is the leading platform for orchestrating containers at scale. | Core Concept |
Serverless Computing | A cloud execution model where the cloud provider dynamically manages the allocation and provisioning of servers. Applications are broken down into individual functions that run when triggered by an event. Serverless architectures allow developers to focus on code without worrying about infrastructure, with automatic scaling and pay-for-execution pricing. AWS Lambda, Azure Functions, and Google Cloud Functions are examples. | Core Concept |
Cloud-Native | An approach to building and running applications that fully exploits the advantages of the cloud computing model. Cloud-native applications are designed specifically for cloud environments, typically using microservices architecture, containers, dynamic orchestration, and continuous delivery. These applications are built to thrive in the elastic, distributed, and highly available nature of modern cloud platforms. | Core Concept |
API (Application Programming Interface) | A set of definitions, protocols, and tools for building software that specify how software components should interact. In cloud computing, APIs enable different services and applications to communicate with each other. RESTful APIs are commonly used to integrate cloud services, allowing developers to access and combine functionality from multiple providers. | Core Concept |
Multi-tenancy | An architecture where a single instance of software serves multiple customers (tenants). Each tenant’s data is isolated and remains invisible to other tenants. Multi-tenancy allows cloud providers to share resources efficiently across many users while maintaining privacy and security, contributing to the cost-effectiveness of cloud computing. | Core Concept |
Object Storage | A storage architecture that manages data as objects (as opposed to files or blocks) with metadata and unique identifiers. Object storage is ideal for storing unstructured data like images, videos, backups, and web content. It offers virtually unlimited scalability, high durability, and low cost. Examples include Amazon S3, Azure Blob Storage, and Google Cloud Storage. A short SDK sketch appears after this table. | Storage |
Block Storage | A storage architecture that manages data as fixed-size blocks. Block storage provides low-latency access to data and is often used for applications requiring consistent I/O performance like databases or enterprise applications. Cloud block storage can be attached to virtual machines like physical hard drives. Examples include Amazon EBS, Azure Disk Storage, and Google Persistent Disks. | Storage |
File Storage | A storage architecture that organizes and represents data as a hierarchy of files and folders, accessed through standard file access protocols. Cloud file storage is ideal for shared content, application development, and workloads requiring file system features. Examples include Amazon EFS, Azure Files, and Google Filestore. | Storage |
Content Delivery Network (CDN) | A distributed network of servers that delivers web content and other media to users based on their geographic location, optimizing for speed and availability. CDNs cache content at edge locations around the world, reducing latency and bandwidth usage while improving user experience. Examples include Amazon CloudFront, Azure CDN, Google Cloud CDN, and Cloudflare. | Storage |
Storage Classes/Tiers | Different levels of storage with varying performance, availability, and cost characteristics. Cloud providers offer multiple storage tiers such as hot (frequently accessed), cool (infrequently accessed), and archive (rarely accessed) storage, allowing organizations to balance performance needs with cost considerations based on data access patterns. | Storage |
Data Lake | A centralized repository that allows storing structured and unstructured data at any scale. Unlike hierarchical data warehouses, data lakes store data in its raw format without having to first structure it. Cloud-based data lakes provide scalable storage and analytics capabilities for big data analytics, machine learning, and data discovery. Examples include AWS Lake Formation, Azure Data Lake Storage, and Google Cloud Storage as the foundation of Google Cloud’s data lake solutions. | Storage |
Virtual Machine (VM) | A software emulation of a physical computer that runs an operating system and applications. In cloud computing, VMs are created on physical servers in data centers, enabling multiple virtual servers to run on a single physical server. Cloud providers offer various VM types optimized for different workloads. Examples include Amazon EC2 instances, Azure Virtual Machines, and Google Compute Engine instances. | Compute |
Container | A lightweight, standalone, executable software package that includes everything needed to run an application: code, runtime, system tools, libraries, and settings. Containers isolate applications from their environment, ensuring they work uniformly despite differences in infrastructure. Cloud container services provide managed environments for deploying and operating containerized applications. Examples include Amazon ECS/EKS, Azure Container Instances/AKS, and Google Kubernetes Engine. | Compute |
Auto Scaling | A feature that automatically adjusts the number of compute resources based on the current demand for the application. Auto scaling helps maintain application availability and allows organizations to scale compute capacity up or down automatically according to conditions defined by the user, optimizing performance and cost. Examples include AWS Auto Scaling, Azure Autoscale, and Google Cloud Autoscaler. A simplified scaling-decision sketch appears after this table. | Compute |
Instance | A virtual server in the cloud. An instance is a single virtual machine that runs a specified operating system and applications. Cloud instances can be provisioned with various combinations of CPU, memory, storage, and networking capacity to meet different workload requirements. Examples include EC2 instances, Azure VMs, and Google Compute Engine instances. | Compute |
Bare Metal Server | A physical server dedicated to a single tenant without virtualization or hypervisor layer. Bare metal servers provide direct access to physical hardware resources, delivering maximum performance, security, and control. They are ideal for workloads requiring high performance, specialized hardware, or specific compliance requirements. Examples include AWS Bare Metal instances, IBM Cloud Bare Metal, and Oracle Bare Metal Cloud. | Compute |
GPU Instance | Cloud virtual machines equipped with Graphics Processing Units (GPUs) to accelerate specific workloads like machine learning, deep learning, scientific computing, and 3D visualization. GPU instances provide massive parallel processing capabilities for compute-intensive applications. Examples include AWS EC2 P4/G4 instances, Azure N-series VMs, and Google Cloud GPU instances. | Compute |
Relational Database Service | Managed cloud services that provide relational databases without requiring users to manage the underlying infrastructure. These services handle routine database tasks such as setup, patching, backups, and high availability, allowing users to focus on application development. They support traditional RDBMS engines like MySQL, PostgreSQL, SQL Server, and Oracle. Examples include Amazon RDS, Azure SQL Database, and Google Cloud SQL. | Database |
NoSQL Database | Non-relational database systems designed for distributed data stores with large-scale data storage needs. NoSQL databases use various data models including document, key-value, wide-column, and graph formats. They provide flexible schemas and horizontal scalability suited for big data and real-time web applications. Cloud NoSQL services include Amazon DynamoDB, Azure Cosmos DB, and Google Cloud Firestore/Bigtable. | Database |
In-Memory Database | Database systems that store data primarily in memory (RAM) rather than on disk, delivering extremely fast performance for applications requiring quick data access. In-memory databases are used for caching, session management, real-time analytics, and high-speed transactions. Cloud in-memory database services include Amazon ElastiCache, Azure Cache for Redis, and Google Cloud Memorystore. | Database |
Data Warehouse | A large-scale repository optimized for analytics and reporting, storing current and historical data from multiple sources in a structured format. Cloud data warehouses provide scalable analytics capabilities without the need to manage infrastructure, supporting business intelligence activities and data-driven decision making. Examples include Amazon Redshift, Azure Synapse Analytics, Google BigQuery, and Snowflake. | Analytics |
Big Data Processing | Cloud services designed to process and analyze extremely large datasets that traditional data processing applications cannot handle. These services provide managed environments for big data frameworks like Hadoop, Spark, and Flink, enabling organizations to derive insights from massive amounts of structured and unstructured data. Examples include Amazon EMR, Azure HDInsight, and Google Dataproc. | Analytics |
Stream Processing | Services that enable real-time processing of continuous data streams from sources like IoT devices, logs, or application events. Stream processing allows organizations to analyze and act on data as it’s being generated rather than after it’s stored. Cloud stream processing services include Amazon Kinesis, Azure Stream Analytics, and Google Cloud Dataflow. | Analytics |
Virtual Private Cloud (VPC) | A logically isolated section of the cloud where users can launch resources in a virtual network they define. VPCs allow users to control network settings like IP address ranges, subnets, route tables, and network gateways, providing a secure and customizable network environment for cloud resources. Examples include Amazon VPC, Azure Virtual Network, and Google Cloud VPC. | Networking |
Load Balancer | A service that distributes incoming network traffic across multiple servers to ensure no single server becomes overwhelmed, improving application responsiveness and availability. Cloud load balancers can handle varying workloads and automatically scale to meet demand. They can operate at different network layers (application or transport) and across multiple zones. Examples include AWS Elastic Load Balancing, Azure Load Balancer, and Google Cloud Load Balancing. | Networking |
Cloud DNS | Managed Domain Name System (DNS) services that translate human-readable domain names into IP addresses. Cloud DNS services provide highly available and scalable domain name resolution with low latency, enabling users to route traffic to various cloud resources. Examples include Amazon Route 53, Azure DNS, and Google Cloud DNS. | Networking |
VPN (Virtual Private Network) | A secure connection that extends a private network across a public network like the internet. Cloud VPN services create encrypted tunnels between on-premises networks and cloud VPCs, allowing for secure communication between environments. This enables hybrid cloud architectures where applications can span both cloud and on-premises resources. Examples include AWS VPN, Azure VPN Gateway, and Google Cloud VPN. | Networking |
Direct Connect/Express Route | Dedicated network connections between on-premises data centers and cloud providers that bypass the public internet. These connections provide consistent network performance, reduced latency, increased bandwidth, and enhanced security compared to internet-based connections. Examples include AWS Direct Connect, Azure ExpressRoute, and Google Cloud Interconnect. | Networking |
Network Security Group (NSG) | A virtual firewall for controlling inbound and outbound network traffic to cloud resources. NSGs contain rules that allow or deny traffic based on protocol, port, and source/destination IP addresses. They provide an important security layer for resources within virtual networks. Examples include AWS Security Groups, Azure Network Security Groups, and Google Cloud Firewall Rules. | Networking |
Identity and Access Management (IAM) | A framework of policies and technologies for ensuring that the right users have the appropriate access to resources. Cloud IAM services allow organizations to manage users, groups, roles, and their permissions across cloud resources, enforcing the principle of least privilege. Examples include AWS IAM, Azure Active Directory/IAM, and Google Cloud IAM. | Security |
Key Management Service (KMS) | Managed services for creating and controlling encryption keys used to encrypt data. Cloud KMS solutions provide centralized control over the entire lifecycle of cryptographic keys, enabling users to encrypt data across cloud services. They offer secure key storage, rotation policies, and audit capabilities. Examples include AWS KMS, Azure Key Vault, and Google Cloud KMS. | Security |
Single Sign-On (SSO) | An authentication process that allows users to access multiple applications with one set of credentials. Cloud SSO services integrate with enterprise identity providers and cloud applications, streamlining user access while maintaining security. Examples include AWS Single Sign-On, Azure Active Directory, and Google Cloud Identity. | Security |
Multi-Factor Authentication (MFA) | A security mechanism that requires users to provide two or more verification factors to gain access to resources. Cloud MFA implementations add an extra layer of security beyond passwords, requiring something the user knows (password), has (mobile device or key), or is (biometric). Most cloud providers offer MFA options for accessing management consoles and resources. | Security |
Cloud Security Posture Management (CSPM) | Tools and services that continuously monitor cloud environments for compliance with security policies, detecting misconfigurations and vulnerabilities. CSPM solutions help organizations maintain security best practices across complex cloud deployments. Examples include AWS Security Hub, Azure Security Center, and Google Cloud Security Command Center. | Security |
Web Application Firewall (WAF) | A security solution that protects web applications from common web exploits and attacks such as SQL injection, cross-site scripting, and denial-of-service. Cloud WAFs filter and monitor HTTP traffic between web applications and the internet, protecting applications from malicious activities. Examples include AWS WAF, Azure Front Door WAF, and Google Cloud Armor. | Security |
Cloud Management Platform | Tools and interfaces for administering cloud resources across providers and services. Cloud management platforms provide unified visibility, automation, governance, and cost management functions for complex multi-cloud environments. They help organizations standardize operations and enforce policies consistently. Examples include cloud-native tools like AWS Management Console, Azure Portal, and Google Cloud Console, as well as third-party platforms. | Management |
Infrastructure as Code (IaC) | The practice of managing and provisioning infrastructure through code instead of manual processes. IaC allows for consistent, repeatable deployment of cloud resources using declarative configuration files. This approach treats infrastructure provisioning like software development, enabling version control, testing, and automation. Popular IaC tools include AWS CloudFormation, Azure Resource Manager templates, Google Cloud Deployment Manager, Terraform, and Pulumi. | Management |
Monitoring and Observability | Services that track the performance, availability, and operation of cloud resources and applications. Cloud monitoring solutions collect metrics, logs, traces, and events, providing insights into system behavior, troubleshooting assistance, and automated alerting. Examples include Amazon CloudWatch, Azure Monitor, and Google Cloud Monitoring. | Management |
Cost Management | Tools and services for monitoring, analyzing, optimizing, and controlling cloud spending. Cloud cost management solutions provide visibility into resource usage and expenditures, helping organizations identify waste, forecast costs, and implement budgetary controls. Examples include AWS Cost Explorer, Azure Cost Management, and Google Cloud Cost Management. | Management |
Cloud Service Broker (CSB) | An entity that manages the relationship between cloud service providers and consumers. CSBs help organizations select, deploy, and manage cloud services across multiple providers, offering intermediation, aggregation, and integration services. They can be software platforms, managed service providers, or internal IT teams that streamline multi-cloud operations. | Management |
Configuration Management | The process of establishing and maintaining consistent settings for systems and software across a cloud environment. Cloud configuration management tools automate the provisioning and management of resources according to defined specifications, ensuring consistency and compliance. Examples include tools like AWS Config, Azure Policy, and Google Cloud Asset Inventory, as well as third-party solutions like Ansible, Chef, and Puppet. | Management |
CI/CD (Continuous Integration/Continuous Delivery) | A set of practices and tools for automating the software delivery process. CI/CD pipelines automatically build, test, and deploy code changes to production environments, enabling frequent and reliable software releases. Cloud-based CI/CD services eliminate the need to maintain build servers and provide seamless integration with cloud resources. Examples include AWS CodePipeline, Azure DevOps, and Google Cloud Build. | DevOps |
Container Orchestration | The automated arrangement, coordination, and management of containers. Container orchestration tools handle deployment, scaling, networking, and availability of containerized applications. Kubernetes has emerged as the de facto standard for container orchestration, with most cloud providers offering managed Kubernetes services like Amazon EKS, Azure AKS, and Google GKE. | DevOps |
Microservices | An architectural style that structures an application as a collection of loosely coupled, independently deployable services. Microservices in cloud environments leverage container technologies, orchestration tools, and managed services to create flexible, scalable applications. This approach allows teams to develop, deploy, and scale parts of the application independently. | DevOps |
Blue-Green Deployment | A release technique that maintains two identical production environments, with only one active at a time. The “blue” environment runs the current production version while the “green” environment contains the new release. After testing the green environment, traffic is switched from blue to green, minimizing downtime and risk. Cloud platforms facilitate blue-green deployments through load balancing and traffic routing capabilities. | DevOps |
Canary Deployment | A deployment strategy where a new version of an application is gradually rolled out to a small subset of users before full deployment. This approach allows for real-world testing and monitoring of the new version while limiting potential negative impacts. Cloud load balancers and traffic managers enable fine-grained control over the percentage of traffic sent to different versions. | DevOps |
GitOps | An operational framework that applies DevOps best practices for infrastructure automation using Git as a single source of truth. With GitOps, infrastructure and application configurations are stored as code in Git repositories, and changes to the environment are made through pull requests and automated deployments. This approach is particularly popular for managing Kubernetes environments in the cloud. | DevOps |
Machine Learning as a Service (MLaaS) | Cloud-based platforms that provide machine learning tools and infrastructure as a service. MLaaS offerings include pre-trained models, model training frameworks, data processing, and inference services, making AI capabilities accessible without requiring specialized hardware or extensive expertise. Examples include Amazon SageMaker, Azure Machine Learning, and Google Cloud AI Platform. | AI/ML |
Pre-trained AI Models | Ready-to-use artificial intelligence models provided by cloud services that have been trained on large datasets for common tasks. These include models for computer vision, natural language processing, speech recognition, and recommendation systems. Pre-trained models allow developers to implement AI capabilities without having to train models from scratch. Examples include AWS AI Services, Azure Cognitive Services, and Google Cloud AI APIs. | AI/ML |
AutoML | Cloud services that automate the process of applying machine learning to real-world problems. AutoML tools handle complex steps in the ML pipeline including feature engineering, algorithm selection, hyperparameter tuning, and model optimization, making machine learning accessible to users with limited data science expertise. Examples include Amazon SageMaker Autopilot, Azure Automated ML, and Google Cloud AutoML. | AI/ML |
Neural Networks as a Service | Cloud platforms that provide specific infrastructure and frameworks for training and deploying neural network models. These services typically offer GPU or TPU acceleration and specialized frameworks for deep learning workloads. They enable organizations to build advanced AI solutions without investing in expensive hardware. Examples include AWS Deep Learning AMIs, Azure Machine Learning with GPU clusters, and Google Cloud AI Platform with TPUs. | AI/ML |
AI Infrastructure | Cloud-based hardware resources specifically optimized for artificial intelligence workloads. These include GPU instances, TPU (Tensor Processing Unit) accelerators, and optimized virtual machines designed for the computational demands of training and running deep learning models. AI infrastructure services provide the raw computing power needed for sophisticated machine learning applications at scale. | AI/ML |
MLOps | A set of practices that combines Machine Learning, DevOps, and Data Engineering to deploy and maintain ML systems in production reliably and efficiently. Cloud MLOps services provide tools for version control of datasets and models, automated testing, continuous deployment, monitoring, and governance of ML workflows. Examples include AWS SageMaker MLOps, Azure Machine Learning MLOps, and Google Cloud MLOps solutions. | AI/ML |
Internet of Things (IoT) Platform | Cloud services designed to connect, manage, and analyze data from IoT devices at scale. IoT platforms provide capabilities for device registration, secure communication, data ingestion, storage, real-time analytics, and integration with other cloud services. They form the backbone of IoT solutions, bridging physical devices with digital systems. Examples include AWS IoT, Azure IoT Hub, and Google Cloud IoT. | IoT |
Edge Computing | A distributed computing paradigm that brings computation and data storage closer to the location where it is needed. In cloud contexts, edge computing extends cloud capabilities to the network edge, reducing latency, bandwidth usage, and enabling disconnected operation. Cloud providers offer edge computing services that deploy cloud functionality to edge locations or integrate with on-premises edge devices. Examples include AWS Outposts/Wavelength, Azure Edge Zones, and Google Anthos for the edge. | Edge Computing |
Digital Twin | A virtual representation of a physical object, process, or system that serves as a real-time digital counterpart. Cloud-based digital twin platforms integrate IoT data with simulation and analytics capabilities, enabling monitoring, analysis, and optimization of physical assets. Examples include Azure Digital Twins, AWS IoT TwinMaker, and GE Predix (on various cloud platforms). | IoT |
IoT Analytics | Cloud services specialized in processing, analyzing, and deriving insights from IoT device data. These services handle the unique challenges of IoT data, including high volume, velocity, and variety. IoT analytics platforms typically include capabilities for data ingestion, storage, processing, and visualization designed for time-series and real-time data. Examples include AWS IoT Analytics, Azure Time Series Insights, and Google Cloud IoT Core with BigQuery. | IoT |
Edge AI | The deployment of artificial intelligence applications at the edge of the network, close to where data is created. Cloud providers offer tools for developing, training, and deploying machine learning models to edge devices, enabling AI inference with low latency without constant cloud connectivity. Examples include AWS IoT Greengrass ML, Azure IoT Edge with ML, and Google Edge TPU. | Edge Computing |
Fog Computing | An extension of cloud computing that brings processing capabilities closer to the data source in a layer between edge devices and central cloud data centers. Fog computing distributes computing, storage, and networking services along the cloud-to-thing continuum, providing processing at the most efficient location. It combines elements of both cloud and edge computing to optimize system performance, especially for IoT applications. | Edge Computing |
Amazon Web Services (AWS) | A comprehensive cloud platform launched by Amazon in 2006, offering over 200 fully featured services from data centers globally. AWS provides IaaS, PaaS, and SaaS offerings covering compute, storage, databases, networking, analytics, machine learning, IoT, security, and enterprise applications. Its core services include EC2 (compute), S3 (storage), RDS (database), and Lambda (serverless). AWS is recognized for its breadth of services, mature ecosystem, and extensive global infrastructure. | Provider |
Microsoft Azure | Microsoft’s cloud computing platform launched in 2010, offering a wide range of cloud services across IaaS, PaaS, and SaaS models. Azure is known for its integration with Microsoft’s enterprise software (like Windows Server, SQL Server, and Active Directory), hybrid cloud capabilities, and strong enterprise focus. Key services include Azure Virtual Machines, Azure SQL Database, Azure Functions, and Microsoft 365. Azure has strong positions in hybrid deployments and enterprise solutions. | Provider |
Google Cloud Platform (GCP) | Google’s cloud computing platform that leverages the same infrastructure used for Google’s own products like Search and YouTube. Launched in 2008, GCP offers a range of cloud services including compute, storage, databases, networking, big data, machine learning, and analytics. Notable services include Compute Engine, Cloud Storage, BigQuery, and Kubernetes Engine. GCP is recognized for its strengths in data analytics, machine learning, and container technologies. | Provider |
IBM Cloud | IBM’s integrated cloud platform offering both public and private cloud options, with particular strength in hybrid cloud solutions. IBM Cloud (formerly Bluemix) provides IaaS, PaaS, and SaaS offerings with a focus on enterprise workloads, AI capabilities through Watson, and industry-specific solutions. IBM Cloud is known for its robust security features, bare metal server options, and enterprise-grade reliability. | Provider |
Oracle Cloud Infrastructure (OCI) | Oracle’s cloud platform designed to run enterprise applications and databases, emphasizing high performance and security. OCI offers compute, storage, networking, database, and platform services with a particular focus on running Oracle workloads like Oracle Database and applications. Oracle Cloud is known for its dedicated hardware options, autonomous database services, and integrated enterprise application suite. | Provider |
Alibaba Cloud | The cloud computing arm of Chinese e-commerce giant Alibaba Group, also known as Aliyun. Launched in 2009, Alibaba Cloud offers a full suite of cloud services including elastic computing, storage, database, networking, and AI capabilities. It has a dominant market position in China and significant presence throughout Asia-Pacific. Alibaba Cloud provides particular advantages for companies doing business in China due to its local data centers and compliance with Chinese regulations. | Provider |
Cloud Compliance | The process of ensuring cloud operations and infrastructure adhere to relevant industry standards, regulations, and security frameworks. Cloud compliance encompasses data protection, privacy, sovereignty, audit capabilities, and security control implementation. Common cloud compliance frameworks include GDPR, HIPAA, PCI DSS, SOC 2, ISO 27001, and FedRAMP. Cloud providers offer various certifications and features to help customers meet their compliance obligations. | Compliance |
Cloud Security Alliance (CSA) | A not-for-profit organization that promotes best practices for providing security assurance within cloud computing environments. CSA offers guidance, education, and the Security, Trust, Assurance, and Risk (STAR) registry for cloud provider security. Their Cloud Controls Matrix (CCM) provides a framework for assessing cloud security risks and the Consensus Assessments Initiative Questionnaire (CAIQ) offers standardized questions for evaluating cloud providers. | Compliance |
Shared Responsibility Model | A security framework that delineates which security aspects are handled by the cloud provider versus the customer. In this model, providers are typically responsible for security “of” the cloud (infrastructure, hardware, networking), while customers are responsible for security “in” the cloud (data, applications, access management). The specific boundaries of responsibility vary by service model (IaaS, PaaS, SaaS) and provider. | Compliance |
Data Sovereignty | The concept that data is subject to the laws and governance of the country in which it is physically stored. Data sovereignty requirements dictate where cloud data can be located and how it can be transferred across borders. Cloud providers address these requirements through regional data centers, data residency controls, and compliance with local regulations like GDPR in Europe or data localization laws in countries like Russia, China, and India. | Compliance |
Cloud Access Security Broker (CASB) | Security enforcement points between cloud service consumers and providers that apply security policies to cloud services. CASBs provide visibility into cloud usage, data security, compliance monitoring, and threat protection for cloud environments. They can operate in proxy or API-based modes to monitor and control cloud access and data transfers. Examples include Microsoft Defender for Cloud Apps, Netskope, and Zscaler. | Security |
Cloud Governance | The framework of policies, procedures, and standards that ensure cloud resources are used effectively, securely, and in alignment with business objectives. Cloud governance encompasses access controls, cost management, compliance monitoring, resource consistency, and operational oversight. Cloud providers offer governance tools like AWS Organizations, Azure Policy, and Google Organization Policy Service to implement governance at scale. | Compliance |
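
A few of the entries above are easier to grasp in code. The Function as a Service model, for instance, comes down to writing a handler that the platform invokes once per event. Below is a minimal sketch of an AWS Lambda-style Python handler; the `name` field in the event payload and the response shape are illustrative assumptions, since both vary by trigger.

```python
import json

# AWS Lambda-style handler: the platform provisions the runtime, invokes this
# function once per event, and scales the number of instances automatically.
# The "name" field in the event payload is a hypothetical example.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind an API gateway, each incoming request triggers one invocation, and billing covers only the time the function actually runs.
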
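The Object Storage entry likewise maps onto a few SDK calls. The sketch below uses boto3 against Amazon S3, assuming credentials are already configured; the bucket and object names are hypothetical.

```python
import boto3

# Create an S3 client using credentials from the environment or AWS config files.
s3 = boto3.client("s3")

# Upload a local file as an object; bucket and key names here are hypothetical.
s3.upload_file("report.pdf", "example-bucket", "backups/report.pdf")

# Generate a time-limited presigned URL so a client can download the object
# without holding AWS credentials.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "backups/report.pdf"},
    ExpiresIn=3600,  # seconds (one hour)
)
print(url)
```

Azure Blob Storage and Google Cloud Storage expose equivalent upload and signed-URL operations through their own SDKs.
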
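Auto Scaling, finally, is a control loop over metrics. The pure-Python sketch below shows the core scale-out/scale-in decision that managed services implement on your behalf; the thresholds, bounds, and metric are illustrative assumptions, not any provider’s defaults.

```python
# Illustrative scaling decision based on average CPU utilization.
# Thresholds and instance bounds are hypothetical; real services such as
# AWS Auto Scaling or Azure Autoscale expose them as user-defined policies.
def desired_capacity(current_instances: int, avg_cpu_percent: float,
                     min_instances: int = 2, max_instances: int = 10) -> int:
    if avg_cpu_percent > 70:      # sustained high load: add capacity
        target = current_instances + 1
    elif avg_cpu_percent < 30:    # low load: remove capacity
        target = current_instances - 1
    else:                         # within the comfort band: hold steady
        target = current_instances
    return max(min_instances, min(max_instances, target))

print(desired_capacity(current_instances=4, avg_cpu_percent=85))  # -> 5
print(desired_capacity(current_instances=4, avg_cpu_percent=12))  # -> 3
```
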
Term | Definition | Category |
---|---|---|
Cloud-Native Security | Security approaches specifically designed for cloud-native applications and infrastructure. Cloud-native security embraces principles like immutable infrastructure, microservices protection, API security, and DevSecOps practices. It focuses on securing containers, serverless functions, service meshes, and ephemeral workloads that are common in modern cloud architectures. | Security |
Zero Trust Architecture | A security model that assumes no user or system should be inherently trusted, whether inside or outside the network perimeter. In cloud environments, Zero Trust implements principles like “never trust, always verify,” least privilege access, micro-segmentation, and continuous authentication and authorization. Cloud providers increasingly offer native services to support Zero Trust implementation across their platforms. | Security |
FinOps (Cloud Financial Operations) | An operational framework and cultural practice that brings financial accountability to cloud spending. FinOps combines systems, best practices, and culture to increase an organization’s ability to understand and optimize cloud costs. The approach involves collaboration between finance, technology, and business teams to make data-driven decisions about cloud resource usage and spending. | Management |
Service Mesh | A dedicated infrastructure layer for controlling service-to-service communication within cloud-native applications. Service meshes abstract network functionalities from application code, providing capabilities like traffic management, security, and observability for microservices. Popular service mesh implementations include Istio, Linkerd, and AWS App Mesh, which integrate with container orchestration platforms like Kubernetes in cloud environments. | Networking |
Chaos Engineering | The practice of intentionally introducing failures in a system to test its resilience and identify weaknesses. In cloud environments, chaos engineering involves deliberately terminating instances, degrading network performance, or simulating infrastructure failures to ensure systems can withstand disruptions. Cloud platforms support chaos engineering through APIs that allow programmatic control of resources and fault injection. | DevOps |
Event-Driven Architecture | A software design pattern where the flow of the program is determined by events such as user actions, sensor outputs, or system messages. Cloud event-driven architectures leverage managed services like AWS EventBridge, Azure Event Grid, or Google Cloud Pub/Sub to create loosely coupled systems that react to events asynchronously. This approach is particularly well-suited to serverless applications in the cloud. A small publish/subscribe sketch follows this table. | Architecture |
DataOps | An automated, process-oriented methodology to improve the quality and reduce the cycle time of data analytics. DataOps in the cloud leverages automation, orchestration, and monitoring to streamline the entire data lifecycle from ingestion through processing, analysis, and visualization. Cloud platforms offer integrated tools for building DataOps pipelines that connect various data services. | Data Management |
AIOps | Artificial Intelligence for IT Operations – the application of AI and machine learning to automate and enhance IT operations. In cloud environments, AIOps platforms analyze the massive volumes of data generated by infrastructure, applications, and monitoring tools to detect patterns, predict issues, and automate responses. Cloud providers increasingly embed AIOps capabilities within their monitoring and management services. | Management |
Quantum Computing as a Service | Cloud services that provide access to quantum computing resources and simulators. These emerging services allow organizations to experiment with quantum algorithms and applications without investing in quantum hardware. Examples include Amazon Braket, Azure Quantum, and Google Quantum AI, which offer access to different quantum technologies and development frameworks. | Emerging Technology |
Confidential Computing | A cloud security technology that protects data in use by performing computation in a hardware-based Trusted Execution Environment (TEE). Confidential computing ensures that data remains encrypted even during processing, protecting it from privileged users, cloud providers, and potential attackers with access to the underlying infrastructure. Cloud providers offer confidential computing options such as AWS Nitro Enclaves, Azure Confidential Computing, and Google Cloud Confidential Computing. | Security |
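
The Event-Driven Architecture entry above can be illustrated with a small in-process publish/subscribe sketch. The toy bus below stands in for a managed broker such as EventBridge, Event Grid, or Pub/Sub; the event names and handlers are hypothetical.

```python
from collections import defaultdict
from typing import Callable

# Toy in-memory event bus standing in for a managed broker such as
# AWS EventBridge, Azure Event Grid, or Google Cloud Pub/Sub.
class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Producers never know who consumes the event: loose coupling.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
bus.subscribe("order.created", lambda e: print("send confirmation email to", e["email"]))
bus.subscribe("order.created", lambda e: print("reserve inventory for order", e["order_id"]))
bus.publish("order.created", {"order_id": 42, "email": "customer@example.com"})
```
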
Term | Definition | Category |
---|---|---|
Hybrid Cloud Management | Tools and practices for unified management of resources across public cloud, private cloud, and on-premises environments. Hybrid cloud management solutions provide consistent governance, operations, and visibility across heterogeneous infrastructure. These platforms help synchronize configurations, policies, security controls, and workload deployment across hybrid environments. Examples include Azure Arc, AWS Outposts, Google Anthos, and IBM Cloud Satellite. | Hybrid Cloud |
Cloud Bursting | A hybrid cloud deployment model where an application runs in a private cloud or data center and “bursts” into a public cloud when demand for computing capacity spikes. Cloud bursting allows organizations to maintain normal workloads on-premises while leveraging public cloud resources for peak periods, providing cost-effective scalability without migrating the entire application to the cloud. A simplified placement sketch follows this table. | Hybrid Cloud |
Multi-Cloud Management Platform | Software solutions that provide unified control and visibility across multiple cloud providers. These platforms abstract provider-specific differences to offer consistent workload management, security policies, cost optimization, and resource governance across diverse cloud environments. They help organizations avoid vendor lock-in while leveraging the best services from different providers. Examples include VMware Cloud Foundation, HashiCorp tools, and Morpheus Data. | Multi-Cloud |
Cloud Arbitrage | A strategic approach to multi-cloud deployment that involves selecting cloud services based on pricing, performance, or feature advantages at a given time. Cloud arbitrage leverages price and performance differences between providers to optimize resource allocation, potentially switching workloads between clouds to maximize efficiency and minimize costs. | Multi-Cloud |
Cross-Cloud Networking | Network architectures and services that connect resources across multiple cloud environments. Cross-cloud networking solutions provide secure, reliable connectivity between workloads running in different clouds, enabling data transfer, application communication, and consistent network policies. Examples include multi-cloud service mesh implementations, cloud network virtualization platforms, and software-defined WAN solutions that integrate multiple cloud providers. | Multi-Cloud |
Cloud Center of Excellence (CCoE) | A cross-functional team responsible for developing and managing cloud strategy, governance, and best practices within an organization. The CCoE establishes standards for cloud adoption, architecture, security, and operations, often governing hybrid and multi-cloud environments. This central team accelerates cloud adoption while ensuring consistency, efficiency, and compliance across the organization’s cloud initiatives. | Governance |
Cloud Provider Native Services | Proprietary services developed by cloud providers that aren’t directly portable to other clouds. These services often offer advanced capabilities optimized for the provider’s infrastructure but can create vendor lock-in concerns in multi-cloud strategies. Examples include AWS DynamoDB, Azure Cosmos DB, or Google BigQuery, which have unique features not directly matched on other platforms. | Multi-Cloud |
Cloud-Agnostic Approach | A development and deployment strategy that avoids dependency on provider-specific services, allowing applications to run on any cloud with minimal modification. Cloud-agnostic designs prioritize portability and interoperability by using standardized technologies like containers, Kubernetes, and open APIs rather than proprietary services. This approach provides flexibility for multi-cloud deployments but may sacrifice some advanced cloud-native capabilities. | Multi-Cloud |
Distributed Cloud | The distribution of public cloud services to different physical locations while the operation, governance, and evolution remain the responsibility of the public cloud provider. Distributed cloud extends the provider’s infrastructure to customer data centers, edge locations, or other cloud providers’ regions while maintaining a unified control plane. This model combines the benefits of public cloud with the performance and regulatory advantages of local infrastructure. Examples include AWS Outposts, Azure Stack, and Google Distributed Cloud. | Hybrid Cloud |
On-Premises Cloud | Cloud infrastructure and services deployed within an organization’s own data center, bringing cloud-like capabilities to on-premises environments. On-premises cloud solutions provide self-service provisioning, elastic scaling, service catalogs, and consumption-based billing while maintaining local control of the physical infrastructure. Examples include Azure Stack HCI, AWS Outposts, Google Anthos on bare metal, and OpenStack deployments. | Hybrid Cloud |
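
The Cloud Bursting entry describes overflowing into a public cloud only when local capacity is exhausted. A minimal placement sketch, where the capacity figures and the 80% burst threshold are hypothetical:

```python
# Toy cloud-bursting scheduler: keep work on-premises until utilization would
# cross a threshold, then "burst" new jobs to a public cloud.
# The capacity units and the 80% threshold are hypothetical.
ON_PREM_CAPACITY = 100  # capacity units available locally

def place_job(job_units: int, on_prem_used: int, burst_threshold: float = 0.8) -> str:
    projected = (on_prem_used + job_units) / ON_PREM_CAPACITY
    if projected <= burst_threshold:
        return "on-premises"   # normal operation stays local
    return "public-cloud"      # peak demand bursts to rented capacity

print(place_job(job_units=10, on_prem_used=60))  # -> on-premises
print(place_job(job_units=30, on_prem_used=60))  # -> public-cloud
```
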
Term | Definition | Category |
---|---|---|
Cloud Migration | The process of moving applications, data, and other business elements from an on-premises data center to cloud environments, or between different cloud environments. Cloud migration strategies include lift-and-shift (rehosting), refactoring/rearchitecting, replatforming, repurchasing, retiring, and retaining. Each approach offers different trade-offs between speed, cost, and optimization for cloud environments. | Strategy |
Cloud Repatriation | The process of moving applications and data from public cloud environments back to on-premises data centers. Organizations pursue repatriation for various reasons including cost control, performance improvements, regulatory compliance, or strategic realignment. Repatriation is often selective, focusing on specific workloads rather than complete cloud exit, and may result in optimized hybrid deployments. | Strategy |
Cloud Cost Optimization | Strategies and practices to reduce spending and improve efficiency in cloud environments. Cost optimization techniques include rightsizing resources, leveraging spot/reserved instances, implementing auto-scaling, removing unused resources, optimizing storage tiers, and using cost allocation tags. Cloud cost optimization is an ongoing process that balances performance requirements with financial constraints. | Financial |
Cloud TCO (Total Cost of Ownership) | A comprehensive assessment of all direct and indirect costs associated with cloud adoption and operation over time. Cloud TCO calculations include obvious expenses like service fees, as well as less visible costs like migration, integration, training, network connectivity, and ongoing management. TCO analysis helps organizations compare different deployment options and understand the full financial impact of cloud decisions. | Financial |
Cloud ROI (Return on Investment) | The measurement of financial benefits gained relative to the cost of cloud computing investments. Cloud ROI assessments quantify both tangible benefits (like reduced hardware costs or faster time-to-market) and intangible benefits (like improved agility or enhanced customer experience). Calculating cloud ROI helps justify investments and prioritize cloud initiatives that deliver the greatest business value. | Financial |
Cloud Exit Strategy | A plan for migrating away from a particular cloud service or provider if necessary. A well-designed exit strategy reduces vendor lock-in risks by documenting processes for data extraction, workload portability, and service replacement. Cloud exit strategies are important for risk management, compliance, and maintaining negotiating leverage with providers even if migration never occurs. | Strategy |
Business Continuity Planning (BCP) | Procedures designed to ensure critical business functions continue during and after disruptive events, including cloud service outages. Cloud-focused BCP addresses provider failures, service disruptions, and other cloud-specific risks through redundancy, geographic distribution, multi-cloud strategies, and documented recovery processes. Effective BCP in the cloud leverages automation, immutable infrastructure, and regular testing to ensure resilience. | Resilience |
Disaster Recovery as a Service (DRaaS) | Cloud-based services that provide backup, replication, and recovery capabilities to protect critical systems and data. DRaaS solutions replicate on-premises or cloud environments to recovery sites, enabling rapid restoration after disasters with minimal customer infrastructure. They typically include automated failover/failback, periodic testing, and customizable recovery point/time objectives (RPO/RTO). | Resilience |
Cloud Service Level Agreement (SLA) | A contractual agreement between a cloud service provider and customer that defines the expected level of service, including availability, performance, data protection, and support response times. Cloud SLAs specify uptime guarantees (typically as a percentage, like 99.9%), define acceptable performance metrics, outline remedies for service failures, and detail the responsibilities of both provider and customer. A downtime calculation for common uptime targets follows this table. | Governance |
Cloud Maturity Model | A framework that defines different stages of organizational cloud adoption and capability development. Cloud maturity models typically outline progression from initial exploration and experimentation through standardization, optimization, and eventually to advanced capabilities like cloud-native innovation. These models help organizations assess their current state, identify gaps, and create roadmaps for advancing their cloud initiatives. | Strategy |
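
The uptime percentages in a Cloud SLA translate directly into allowed downtime, which is worth computing before signing. A short sketch, assuming a 30-day month and a 365-day year for simplicity:

```python
# Convert an SLA uptime percentage into the maximum downtime it permits.
# Assumes a 30-day month and a 365-day year for simplicity.
def allowed_downtime_minutes(uptime_percent: float, period_days: int) -> float:
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    monthly = allowed_downtime_minutes(sla, 30)
    yearly = allowed_downtime_minutes(sla, 365)
    print(f"{sla}% uptime -> {monthly:.1f} min/month, {yearly / 60:.1f} h/year")
```

At 99.9% uptime, for example, the provider can be down roughly 43 minutes per month and still meet the SLA.
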
Term | Definition | Category |
---|---|---|
Healthcare Cloud | Cloud services specifically designed to meet the unique requirements of the healthcare industry, including HIPAA compliance, protected health information (PHI) security, and integration with healthcare systems. Healthcare clouds provide specialized features for medical imaging storage, clinical data analysis, telehealth services, and compliance monitoring. Examples include AWS for Healthcare, Google Cloud Healthcare API, and Microsoft Cloud for Healthcare. | Industry Cloud |
Financial Services Cloud | Cloud platforms tailored for banking, insurance, and capital markets with built-in controls for regulatory compliance, security, and risk management. Financial services clouds address industry-specific needs like real-time transaction processing, fraud detection, stringent data security, and compliance with regulations such as PCI DSS, SOX, and GDPR. Examples include AWS for Financial Services, IBM Cloud for Financial Services, and Microsoft Cloud for Financial Services. | Industry Cloud |
Government Cloud | Isolated cloud environments that meet the specific security, compliance, and sovereignty requirements of government agencies. Government clouds typically maintain physical separation from commercial cloud infrastructure, employ cleared personnel, and adhere to standards like FedRAMP in the US or equivalent frameworks in other countries. Examples include AWS GovCloud, Azure Government, and Google Cloud for Government. | Industry Cloud |
Media and Entertainment Cloud | Cloud services optimized for content creation, management, distribution, and monetization workflows in the media industry. Media clouds provide specialized capabilities for video processing, content delivery, live streaming, digital asset management, and analytics. They typically offer high-performance storage, powerful rendering capabilities, and global content distribution networks. Examples include AWS for Media & Entertainment, Google Cloud for Media & Entertainment, and Microsoft Azure Media Services. | Industry Cloud |
Retail Cloud | Cloud solutions designed to address retail industry challenges including omnichannel commerce, supply chain management, personalization, and customer experience. Retail clouds provide specialized services for inventory management, point-of-sale systems, recommendation engines, and retail analytics. Examples include AWS for Retail, Google Cloud for Retail, and Microsoft Cloud for Retail. | Industry Cloud |
Gaming Cloud | Cloud platforms optimized for game development, hosting, and delivery. Gaming clouds provide specialized infrastructure for multiplayer game servers, matchmaking, player authentication, virtual worlds, leaderboards, and in-game purchases. They typically offer low-latency networking, global reach, and elastic scaling to handle player demand spikes. Examples include AWS GameTech, Google Cloud for Gaming, and Microsoft Azure PlayFab. | Industry Cloud |
Education Cloud | Cloud solutions tailored for educational institutions, providing tools for digital classrooms, student information systems, learning management, and research computing. Education clouds address specific needs like student privacy (FERPA compliance), scalable computing for research, collaborative learning environments, and budget-friendly licensing models for academic institutions. Examples include AWS for Education, Google Workspace for Education, and Microsoft Education. | Industry Cloud |
High-Performance Computing (HPC) Cloud | Cloud services designed for computationally intensive workloads requiring massive processing power, specialized hardware, and high-throughput networking. HPC clouds enable simulations, modeling, genomics, financial risk analysis, and other complex scientific or engineering applications without requiring dedicated supercomputing infrastructure. They typically offer tightly-coupled compute clusters, bare metal options, GPU/FPGA acceleration, and low-latency networking. Examples include AWS HPC, Azure HPC, and Google Cloud HPC. | Specialized Computing |
VMware Cloud | Cloud services that allow organizations to run their VMware-based workloads natively in public cloud environments without refactoring. These solutions provide a consistent infrastructure platform across on-premises and cloud environments, enabling seamless workload migration and hybrid operations. The VMware software-defined data center stack runs on cloud provider infrastructure but maintains operational consistency with on-premises VMware deployments. Examples include VMware Cloud on AWS, Azure VMware Solution, and Google Cloud VMware Engine. | Specialized Computing |
HPC as a Service (HPCaaS) | On-demand access to high-performance computing resources through cloud providers, eliminating the need for organizations to build and maintain their own supercomputing facilities. HPCaaS offers cloud-based clusters with high-speed interconnects, parallel file systems, and specialized computing instances for simulation, rendering, genomic analysis, AI training, and other compute-intensive workloads. Users pay only for the resources consumed during their HPC jobs, making advanced computing more accessible. A rough cost calculation follows this table. | Specialized Computing |
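
Because HPC as a Service bills only for the resources a job consumes, its cost is roughly nodes × hours × hourly rate. A back-of-the-envelope sketch with entirely hypothetical prices:

```python
# Rough HPCaaS job cost estimate: pay only while the cluster runs.
# The $3.50 per node-hour rate and the job sizes are hypothetical.
def job_cost(nodes: int, hours: float, rate_per_node_hour: float = 3.50) -> float:
    return nodes * hours * rate_per_node_hour

print(f"64-node, 6-hour simulation:    ${job_cost(64, 6):,.2f}")   # $1,344.00
print(f"16-node, 12-hour training run: ${job_cost(16, 12):,.2f}")  # $672.00
```
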
Term | Definition | Category |
---|---|---|
Twelve-Factor App | A methodology for building software-as-a-service applications designed to optimize for cloud deployment. The Twelve-Factor methodology defines best practices for configuration, deployment, scalability, and operations that make applications well-suited to cloud platforms. Principles include storing configuration in the environment, treating backing services as attached resources, executing as stateless processes, and scaling through the process model. A minimal configuration example appears after this table. | Architecture Pattern |
CQRS (Command Query Responsibility Segregation) | An architectural pattern that separates read and write operations for a data store. In cloud applications, CQRS allows optimization of data models and infrastructure for different workload types, using write-optimized services for commands (data changes) and read-optimized services for queries (data retrieval). This pattern is particularly beneficial for high-performance, scalable cloud applications with asymmetric read/write loads. A minimal sketch of the command/query split appears after this table. | Architecture Pattern |
Event Sourcing | A pattern where application state changes are stored as a sequence of events in an append-only log. In cloud environments, event sourcing leverages managed event streaming and serverless services to capture, process, and react to events. This approach provides benefits like complete audit history, temporal querying, and improved scalability through event-driven architectures across distributed cloud services. A minimal append-only-log sketch appears after this table. | Architecture Pattern |
Strangler Fig Pattern | An incremental approach to migrating legacy applications to cloud environments by gradually replacing specific functions with cloud-native services while keeping the overall system running. Like the strangler fig plant that grows around trees, new cloud functionality incrementally surrounds and replaces the legacy system until it can be completely decommissioned. This pattern reduces risk by avoiding complete rewrites and allowing phased migration. | Architecture Pattern |
Backend for Frontend (BFF) | An architectural pattern that creates specific backend services for different frontend applications (web, mobile, IoT). In cloud environments, BFFs are often implemented as serverless functions or containerized microservices that aggregate data from multiple services and optimize it for specific client needs. This approach improves performance, reduces complexity for frontend developers, and optimizes data transfer for various client types. | Architecture Pattern |
API Gateway Pattern | An architectural pattern that provides a unified entry point for clients to access various microservices. In cloud implementations, API gateways handle cross-cutting concerns like authentication, rate limiting, request routing, protocol translation, and response caching. They simplify client interactions with distributed services and provide a consistent interface while decoupling the internal service architecture. Examples include AWS API Gateway, Azure API Management, and Google Cloud API Gateway. | Architecture Pattern |
Sidecar Pattern | A cloud application design pattern where a separate container or process is deployed alongside the main application container to provide supporting features. In cloud environments, sidecars handle cross-cutting concerns like logging, monitoring, security, or service discovery without modifying the primary application. This pattern is particularly common in service mesh implementations, where proxy sidecars manage service-to-service communication. | Architecture Pattern |
Circuit Breaker Pattern | A stability pattern that monitors for failures in distributed cloud systems and stops operation when failures reach a threshold. Like an electrical circuit breaker, it prevents cascading failures by failing fast and providing fallback mechanisms (like serving cached data). After a timeout period, it allows a limited number of test requests to determine if the underlying problem has been resolved before resuming normal operation. A minimal breaker sketch appears after this table. | Architecture Pattern |
Bulkhead Pattern | A fault isolation pattern that compartmentalizes system components to prevent failures from spreading. In cloud architectures, bulkheads create boundaries using separate resource pools, instance groups, or failure domains. By limiting the blast radius of failures, bulkheads increase system resilience, similar to how ship bulkheads contain water in case of a breach, preventing the entire vessel from sinking. | Architecture Pattern |
Saga Pattern | A failure management pattern for maintaining data consistency in distributed cloud transactions without two-phase commit protocols. Sagas break long-running transactions into a sequence of local transactions, each with a corresponding compensating transaction to roll back changes if needed. This pattern is particularly important for microservices architectures in the cloud where traditional ACID transactions across services aren’t feasible. A minimal compensation sketch appears after this table. | Architecture Pattern |
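
To make the Twelve-Factor entry's "store config in the environment" principle concrete, here is a minimal Python sketch; the variable names (`DATABASE_URL`, `CACHE_TTL_SECONDS`) are illustrative, not part of any standard.

```python
# Twelve-factor style configuration: read settings from the environment
# instead of hard-coding them or shipping per-environment config files.
import os

DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
CACHE_TTL_SECONDS = int(os.environ.get("CACHE_TTL_SECONDS", "300"))

def connect():
    # A stateless process picks up whatever backing service the environment
    # points at -- the same code runs unchanged in dev, staging, and production.
    print(f"connecting to {DATABASE_URL} (cache TTL {CACHE_TTL_SECONDS}s)")

if __name__ == "__main__":
    connect()
```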
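For the CQRS entry, the following is a minimal, framework-agnostic Python sketch of the command/query split; `OrderCommandService`, `OrderQueryService`, and the in-memory dictionaries are hypothetical stand-ins for write-optimized and read-optimized stores.

```python
# Minimal CQRS sketch: commands mutate a write model, queries read a
# separately maintained read model.
import uuid

write_store: dict[str, dict] = {}   # stand-in for a write-optimized store
read_store: dict[str, dict] = {}    # stand-in for a denormalized read model

class OrderCommandService:
    """Handles commands (state changes) against the write model."""
    def create_order(self, customer: str, total: float) -> str:
        order_id = str(uuid.uuid4())
        write_store[order_id] = {"customer": customer, "total": total}
        # In a real system this projection would usually be updated
        # asynchronously, e.g. by consuming change events from the write store.
        read_store[order_id] = {"id": order_id, "customer": customer, "total": total}
        return order_id

class OrderQueryService:
    """Handles queries (reads) against the read model only."""
    def get_order(self, order_id: str):
        return read_store.get(order_id)

if __name__ == "__main__":
    commands, queries = OrderCommandService(), OrderQueryService()
    oid = commands.create_order("acme", 42.0)
    print(queries.get_order(oid))
```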
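For the Event Sourcing entry, here is a minimal sketch of an append-only event log with current state rebuilt by replay; the `Event` type and the deposit/withdraw operations are illustrative only.

```python
# Minimal event-sourcing sketch: state changes are appended to an event log,
# and current state is rebuilt by replaying the events in order.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str      # e.g. "Deposited" or "Withdrew"
    amount: float

event_log: list[Event] = []   # stand-in for an append-only event store

def deposit(amount: float) -> None:
    event_log.append(Event("Deposited", amount))

def withdraw(amount: float) -> None:
    event_log.append(Event("Withdrew", amount))

def current_balance() -> float:
    """Rebuild state by replaying every recorded event."""
    balance = 0.0
    for event in event_log:
        if event.kind == "Deposited":
            balance += event.amount
        elif event.kind == "Withdrew":
            balance -= event.amount
    return balance

if __name__ == "__main__":
    deposit(100.0)
    withdraw(30.0)
    print(current_balance())  # 70.0 -- plus a complete audit trail in event_log
```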
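For the Circuit Breaker entry, here is a minimal in-process sketch (no specific library assumed) showing fail-fast behavior after a failure threshold and a half-open probe after a cooldown.

```python
# Minimal circuit-breaker sketch: after enough consecutive failures the breaker
# "opens" and fails fast; after a cooldown it allows a test call ("half-open")
# to probe whether the dependency has recovered.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failure_count = 0
        self.opened_at = None  # timestamp when the breaker opened

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: fall through and allow one test request.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failure_count += 1
            if self.failure_count >= self.failure_threshold:
                self.opened_at = time.monotonic()  # open (or re-open) the breaker
            raise
        # Success: close the breaker and reset counters.
        self.failure_count = 0
        self.opened_at = None
        return result
```

A caller would wrap remote calls, for example `breaker.call(fetch_profile, user_id)`, and fall back to cached data when the breaker raises.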
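For the Saga entry, here is a minimal sketch of local steps paired with compensating actions; a real saga would coordinate these steps across services through messaging or an orchestrator rather than inside a single process.

```python
# Minimal saga sketch: a sequence of local steps, each paired with a
# compensating action; on failure, completed steps are undone in reverse order.
def run_saga(steps):
    """steps: list of (action, compensation) pairs of zero-argument callables."""
    completed = []
    for action, compensation in steps:
        try:
            action()
            completed.append(compensation)
        except Exception:
            for undo in reversed(completed):
                undo()           # roll back what has already been done
            raise

if __name__ == "__main__":
    run_saga([
        (lambda: print("reserve inventory"), lambda: print("release inventory")),
        (lambda: print("charge payment"),    lambda: print("refund payment")),
        (lambda: print("create shipment"),   lambda: print("cancel shipment")),
    ])
```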
Term | Definition | Category |
---|---|---|
Sustainable Cloud Computing | Cloud services and practices designed to minimize environmental impact through energy efficiency, renewable energy usage, and optimized resource utilization. Sustainable cloud initiatives focus on reducing carbon footprints through green data centers, carbon-aware workload scheduling, and energy-efficient hardware. Cloud providers increasingly offer sustainability dashboards, carbon footprint metrics, and tools to help customers optimize for environmental impact alongside cost and performance. | Sustainability |
Cloud Carbon Footprint | The measure of greenhouse gas emissions associated with cloud services, including electricity consumption, cooling, hardware manufacturing, and network operations. Cloud carbon footprint tools calculate emissions based on resource usage, regional energy grids, and provider-specific efficiency metrics, helping organizations understand and reduce their environmental impact. These tools assist in sustainable architecture decisions, such as selecting low-carbon regions or optimizing resource utilization. | Sustainability |
Sovereign Cloud | Cloud environments designed to address data sovereignty, regulatory compliance, and national security requirements by ensuring data and operations remain within specific geographic or jurisdictional boundaries. Sovereign clouds provide governance controls that protect sensitive data from foreign government access while delivering modern cloud capabilities. These solutions are particularly important for public sector, defense, critical infrastructure, and highly regulated industries. | Compliance |
Observability Platform | Integrated cloud solutions for monitoring, logging, tracing, and analyzing complex distributed systems to understand their internal states from external outputs. Cloud observability platforms unify diverse telemetry data to provide holistic visibility into application performance, user experience, and system health across microservices, serverless functions, and traditional workloads. These platforms leverage AI for anomaly detection and automated root cause analysis. Examples include Datadog, New Relic, Dynatrace, and cloud provider offerings like AWS CloudWatch, Azure Monitor, and Google Cloud Operations. | Monitoring |
eBPF (Extended Berkeley Packet Filter) | A technology that allows programs to run within the Linux kernel without changing kernel source code or loading modules, providing powerful capabilities for security, networking, and observability in cloud environments. eBPF enables fine-grained monitoring, efficient service mesh data planes, advanced network functions, and security enforcement with minimal performance overhead. Cloud-native projects leveraging eBPF include Cilium (networking), Falco (security), and Pixie (observability). | Technology |
WASM (WebAssembly) | A binary instruction format designed as a portable compilation target for programming languages, enabling high-performance applications on the web. In cloud computing, WASM is expanding beyond browsers to become a universal runtime for secure, portable, and efficient cloud functions, API gateways, edge computing, and plugin systems. WASM provides near-native performance, language flexibility, and security isolation with smaller footprints than containers in some use cases. | Technology |
Cloud IDE (Integrated Development Environment) | Browser-based development environments hosted in the cloud that allow developers to write, test, and debug code without local setup. Cloud IDEs provide consistent development experiences across devices, pre-configured environments, integrated cloud services, and collaborative features. They eliminate “works on my machine” problems and streamline onboarding by providing immediate access to ready-to-use development environments. Examples include GitHub Codespaces, AWS Cloud9, and Gitpod. | Development |
Policy as Code | The practice of defining, managing, and enforcing infrastructure policies using code and automation rather than manual processes. In cloud environments, policy as code allows organizations to implement governance guardrails for security, compliance, cost, and operational standards consistently across diverse resources. The policies are version-controlled, testable, and automatically enforced during provisioning or runtime. Examples include tools like Open Policy Agent (OPA), HashiCorp Sentinel, AWS CloudFormation Guard, and Azure Policy. A tool-agnostic sketch appears after this table. | Governance |
Cloud Development Kit (CDK) | Programming libraries that allow developers to define cloud infrastructure using familiar programming languages instead of configuration files or domain-specific languages. CDKs generate infrastructure as code templates (like CloudFormation or Terraform) but provide benefits of traditional programming including type checking, reusable components, and IDE support. They enable more sophisticated infrastructure patterns and better integration with application code. Examples include the AWS CDK and the Cloud Development Kit for Terraform (CDKTF). A short AWS CDK sketch appears after this table. | Development |
Immutable Infrastructure | A cloud infrastructure management approach where components are never modified after deployment; instead, they are replaced entirely when changes are needed. This pattern eliminates configuration drift and ensures that development, testing, and production environments remain identical. Immutable infrastructure is typically implemented using infrastructure as code, containers, and automated deployment pipelines, enabling consistent, reproducible, and reliable system states. | DevOps |
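
For the Policy as Code entry, here is a tool-agnostic Python sketch (not OPA/Rego syntax) of policies expressed as version-controlled functions and evaluated as a CI/CD gate; the resource shape and policy names are hypothetical.

```python
# Tool-agnostic policy-as-code sketch: policies are version-controlled
# functions evaluated against resource definitions before provisioning.
# Production setups would typically use a dedicated engine such as OPA.
def require_encryption(resource: dict) -> list[str]:
    if resource.get("type") == "storage_bucket" and not resource.get("encrypted"):
        return [f"{resource['name']}: storage buckets must be encrypted"]
    return []

def require_owner_tag(resource: dict) -> list[str]:
    if "owner" not in resource.get("tags", {}):
        return [f"{resource['name']}: every resource needs an 'owner' tag"]
    return []

POLICIES = [require_encryption, require_owner_tag]

def evaluate(resources: list[dict]) -> list[str]:
    """Collect all violations; a CI/CD gate would fail the build if any exist."""
    return [v for r in resources for policy in POLICIES for v in policy(r)]

if __name__ == "__main__":
    plan = [{"name": "logs-bucket", "type": "storage_bucket",
             "encrypted": False, "tags": {}}]
    for violation in evaluate(plan):
        print(violation)
```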
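For the Cloud Development Kit entry, here is a short sketch assuming the AWS CDK v2 for Python (with `aws-cdk-lib` and `constructs` installed); it defines a versioned S3 bucket in ordinary Python code and synthesizes a CloudFormation template.

```python
# Assumes AWS CDK v2 for Python: pip install aws-cdk-lib constructs
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

class StaticAssetsStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # An ordinary constructor call defines the bucket; type checking and
        # IDE completion apply, unlike hand-written JSON/YAML templates.
        s3.Bucket(self, "AssetsBucket", versioned=True)

app = cdk.App()
StaticAssetsStack(app, "StaticAssetsStack")
app.synth()  # emits a CloudFormation template under cdk.out/
```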
Resources for Ongoing Learning

For the most up-to-date information on cloud computing terminology and best practices, consider these authoritative resources:
1. AWS Cloud Essentials
2. Microsoft Azure Documentation
3. Google Cloud Documentation
4. NIST Cloud Computing Standards
5. Cloud Security Alliance Research
6. Cloud Native Computing Foundation
7. The Open Group Cloud Computing Work Group
This glossary provides a comprehensive foundation in cloud computing terminology, but the field is evolving rapidly, with new services, best practices, and technologies introduced regularly. Staying connected with the community through blogs, forums, conferences, and certification programs is the best way to keep your cloud knowledge current.