Azure Cloud

Day 2: AZ-204 Training – Azure’s storage cloud solutions

To myself…

Azure Blob Storage

The world has moved online and, undeniably, there is a massive amount of unstructured data that needs to be stored and retrieved. But unstructured data inherently doesn’t have a definition or a data model. Azure’s answer to this problem is Blob Storage.

Blob Storage can store and serve

  • images and documents – directly from a browser
  • video and audio – streaming
  • log files – written by applications
  • backups
  • data for analysis


Blob Storage exposes HTTP and HTTPS endpoints via the Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library, so users and applications can retrieve blobs (objects) globally.
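As a sketch of how a blob is addressed over HTTPS, the endpoint follows a predictable URL pattern. The account, container, and blob names below are hypothetical:

```python
def blob_url(account: str, container: str, blob: str) -> str:
    """Build the public HTTPS endpoint of a blob in Azure Blob Storage."""
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

# Hypothetical names, for illustration only.
url = blob_url("mystorageacct", "images", "logo.png")
print(url)  # https://mystorageacct.blob.core.windows.net/images/logo.png
```

A client with the right permissions can fetch such a URL directly, which is what makes “serve images directly from the browser” work.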

3 Types of Blobs

  • Block – text, binary data – max ~4.75 TB
  • Append – logs (optimised for append operations) – max ~195 GB
  • Page – VHD files (disks) – max 8 TB


All data is automatically encrypted using Storage Service Encryption (SSE).

RBAC can be scoped to the storage account, an individual container, or a queue.

Data at rest is encrypted using 256-bit AES and is FIPS 140-2 compliant.

Encryption does not affect Azure Storage performance

Encryption has no additional cost

Encryption keys can be Microsoft-managed, customer-managed, or customer-provided.

The options differ in key management responsibilities: who performs encryption/decryption operations, which services are supported, where keys are stored, and how key rotation, usage, and access are handled.

Scope: data center, region-wide,







Azure Cosmos DB

A globally distributed database service designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency, and high availability.

Data consistency as spectrum of choice

For a read operation, consistency corresponds to the freshness and ordering of the database state being requested.

Cosmos DB offers 5 well-defined consistency models backed by comprehensive SLAs.

  • Strong – the latest committed write is returned
  • Bounded staleness – reads might lag behind writes by at most “K” versions/updates or by a “T” time interval
  • Session – within a single client session, reads honour read-your-writes and monotonic-read guarantees
  • Consistent prefix – reads never see out-of-order writes
  • Eventual – no ordering guarantee for reads
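The bounded-staleness guarantee can be captured as a small predicate: a read is acceptable if it lags the latest committed write by at most K versions or by at most a T-second interval. A sketch, where K, T, and the sample numbers are purely illustrative:

```python
def within_bounded_staleness(latest_version: int, read_version: int,
                             latest_write_ts: float, read_ts: float,
                             k: int, t_seconds: float) -> bool:
    """True if a read satisfies bounded staleness: it lags the newest
    committed write by at most k versions OR by at most t_seconds."""
    version_lag = latest_version - read_version
    time_lag = latest_write_ts - read_ts
    return version_lag <= k or time_lag <= t_seconds

# Read lags by 3 versions and 2 seconds: acceptable with K=5 or T=10s.
ok = within_bounded_staleness(103, 100, 1000.0, 998.0, k=5, t_seconds=10.0)
```

With K=5 and T=10s, a read lagging by 10 versions and 60 seconds would fall outside the bound.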

Consistency guarantees

What consistency level to choose?


Azure Storage services: blobs, disks, files, queues, tables

Azure Cloud

Day 1: AZ-204 Training – Running Apps on Azure App Service

To myself…

On the Azure platform, a speedy way to create an app is to spin up an App Service tied to an App Service Plan.

An App Service is a family of services (Web Apps, API Apps, Mobile Apps) for running applications in the cloud.

An App Service Plan defines the set of compute resources for an app or apps running in an App Service. The compute resources defined are:

  • OS
  • Region
  • Number of VMs
  • Size of VMs
  • Pricing Tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated)

The pricing tier defines how much you pay for the plan and the subset of App Service features you get.

App Service Plan pricing

Free and Shared tiers

Apps run on a shared VM instance alongside other customers’ apps. Each app receives CPU minutes (a CPU quota) on the shared VM instance and cannot scale out.

Dedicated – Basic, Standard, Premium, PremiumV2 and PremiumV3

App runs on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available for scaling out.


Isolated

Provides network isolation on top of compute isolation for apps. It has maximum scale-out capabilities.
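As an illustrative model only (not an Azure API), the plan attributes listed earlier and the shared-tier scale-out restriction can be sketched as:

```python
from dataclasses import dataclass

# Free and Shared apps run on shared VMs and cannot scale out.
SHARED_TIERS = {"Free", "Shared"}

@dataclass
class AppServicePlan:
    """Toy model of an App Service Plan; field values are examples."""
    os: str        # e.g. "Linux" or "Windows"
    region: str    # e.g. "westeurope"
    vm_count: int  # number of VM instances
    vm_size: str   # e.g. "S1"
    tier: str      # Free, Shared, Basic, Standard, Premium, ...

    def can_scale_out(self) -> bool:
        return self.tier not in SHARED_TIERS

plan = AppServicePlan("Linux", "westeurope", 2, "S1", "Standard")
```

The real per-tier instance limits are omitted here; check the current Azure pricing pages for those numbers.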

App Service Plan features

The features include custom domains, TLS/SSL certificates, autoscaling, deployment slots, backups, Traffic Manager integration, etc.

<put the features per tier>


You can potentially save money by putting multiple apps into one App Service plan.

If your app is resource-intensive, it’s better to place it in its own App Service plan.

Deployment slots can be used to release an app with no downtime. They can also be used to test a new version of your app and observe how users respond to a new feature.

Backups are available in App Service too. Services connected to App Service, such as Storage accounts or SQL Database, can be backed up and restored as needed.

Easy authentication allows authentication of users via Azure Active Directory or Azure Active Directory B2C without making changes to application code.

MySQL in App is a cost-effective way to run a database. However, since it runs inside the App Service, it cannot readily scale.

Azure Cloud

Provisioning is Not the Same As Configuration

To myself… May you use this term in the right context so you don’t confuse people.

What is Provisioning?

Provisioning is the process of setting up infrastructure. In generic terms, it is bringing resources to life to make them available to users and systems.

Example: Provisioning a load balancer means setting up a load balancer within a certain cluster of services.

What is Configuration?

Configuration is the process of bringing provisioned resources into a desired state, both when building them and when maintaining them.

Example: Configuring a load balancer means telling it how it should route requests to the right services within a certain cluster.
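To make the distinction concrete, here is a toy sketch in Python: instantiating the object stands in for provisioning, and setting routing rules stands in for configuration. All names are made up:

```python
class LoadBalancer:
    """Toy load balancer illustrating provision vs configure."""

    def __init__(self) -> None:
        # Provisioning: the resource now exists but knows nothing yet.
        self.routes: dict[str, str] = {}

    def configure_route(self, path_prefix: str, service: str) -> None:
        # Configuration: telling the resource how to behave.
        self.routes[path_prefix] = service

    def route(self, path: str) -> str:
        # Return the first configured service whose prefix matches.
        for prefix, service in self.routes.items():
            if path.startswith(prefix):
                return service
        raise LookupError(f"no route for {path}")

lb = LoadBalancer()                     # provision
lb.configure_route("/api/", "api-svc")  # configure
lb.configure_route("/", "web-svc")
```

Until `configure_route` is called, the provisioned load balancer exists but cannot route anything – which is exactly the point of the heading below.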

Provision Then Configure

You can’t configure something that doesn’t exist, right? 😛 As the heading says, once a resource is provisioned, the next step is to tell it how to behave – that is, configuration.

A good thing to note is that both provisioning and configuration are part of the deployment process in a software development workflow 🙂

Azure Cloud Internet of Things

Getting to Know Azure IoT Device Twin

To myself…

A Brief History of Time

Before device twins, it was historically difficult to query the information of devices in the field. Retrieving it could involve a lot of network hops or even database synchronisation. But looking closer at the problem, most use cases amount to showing the information on a dashboard for monitoring or controlling certain assets of an organisation – there’s no hard real-time requirement. With this clarity about the problem, the device twin was born in Azure.

With device twins, backend services can interact with Azure IoT Hub to query the metadata and data of each device. This capability enables scenarios like device reporting on dashboards or monitoring long-running jobs across many devices.

It is important to note that because device twin updates are asynchronous, the values you get from a query are not guaranteed to be real-time. What you actually read are the last reported values. Most organisations will accept this latency for the majority of their use cases.

What is Device Twin?

A device twin is a JSON document that contains device-specific information. The document has a size limit of 8KB and a maximum depth of 5 levels. The format is:

{
   "identity": { },
   "tags": { },
   "properties": {
      "desired": {
         "status": <>,
         "$version": <>
      },
      "reported": {
         "status": <>,
         "$version": <>
      }
   }
}
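As a quick sketch, the size and depth limits mentioned above can be checked on a candidate document. The sample twin content here is made up:

```python
import json

MAX_BYTES = 8 * 1024  # size limit from the notes above
MAX_DEPTH = 5         # depth limit from the notes above

def depth(node, level=1):
    """Maximum nesting depth of a dict-based JSON structure."""
    if isinstance(node, dict) and node:
        return max(depth(value, level + 1) for value in node.values())
    return level

def twin_fits(doc: dict) -> bool:
    """True if the serialised document respects both limits."""
    raw = json.dumps(doc).encode("utf-8")
    return len(raw) <= MAX_BYTES and depth(doc) <= MAX_DEPTH

# Hypothetical twin content for illustration.
twin = {
    "tags": {"building": "43"},
    "properties": {
        "desired": {"status": "enabled", "$version": 4},
        "reported": {"status": "enabled", "$version": 7},
    },
}
```

A document nested six levels deep, or serialising to more than 8 KB, would fail this check.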

Example of Device Twin JSON File Content


How is Device Twin Created?


How Does Device Twin Work?

<insert diagram here… later>

Field Permissions During Device Twin Interactions

  • desired properties – backend service: read/write; device: read-only
  • reported properties – backend service: read-only; device: read/write

to be continued…

Azure Cloud

Demystifying SAS Tokens

To myself… May you thank the inventors of SAS tokens by heart. Through them, security and policies have never been easier!

Why SAS Tokens?

SAS tokens make access management in the cloud a breeze.

Through SAS tokens, granular access to various cloud resources is possible. Rather than letting clients use an account key, which has the widest privileges to act on resources, SAS tokens are used instead. For example, with an Azure Storage account, you can specify in the token that a client can access only container-level or object-level resources instead of the entire service.

That’s not the only advantage of SAS tokens. They also remove the need to expose the account key, addressing the fear of the account key being leaked. And tokens can be revoked (for example, via a stored access policy) when they are compromised by any means.

How Are SAS Tokens Made?

To generate a SAS token that lets clients access a specific resource, here are the things to declare:

  1. URI – the location and name of the resource
  2. Expiry – the date and time when the token is no longer valid
  3. Key – the key that will sign the token to create the signature
  4. Crypto library – any well-known and trusted library that will hash claims and sign with the key
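Steps 3 and 4 can be sketched with the Python standard library. This is a generic HMAC-SHA256 signing illustration – not Azure’s exact string-to-sign format – and the claims and key below are made up:

```python
import base64
import hashlib
import hmac

def sign_sas(string_to_sign: str, key_b64: str) -> str:
    """HMAC-SHA256 the claims with the (Base64-encoded) key and
    Base64-encode the digest, yielding the token's signature."""
    key = base64.b64decode(key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical claims: resource URI and expiry joined into one string.
claims = "/container/blob.txt\n2025-01-01T00:00:00Z"
key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
signature = sign_sas(claims, key)
```

The server recomputes the same HMAC from the claims in the token and its copy of the key; if the signatures match, the token is authentic and unmodified.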

To be continued…

Azure Cloud Internet of Things

Azure IoT Hub vs Azure Event Hub

I was initially confused about when to use IoT Hub and when to use Event Hub for collecting data from IoT-enabled devices. One reason is the generic concept I had in mind of sending raw data to a cloud hub for data processing. Both IoT Hub and Event Hub can capture messages (events) – receive telemetry – and process and/or store the data for insights. They look the same at a glance, but they’re actually built for different purposes.

IoT Hub is designed to connect devices to the cloud. Communication between device and cloud is bidirectional. The benefits that come with this type of communication include updating devices with new configurations and performing actions upon request, such as updating intelligent processing (ML) on an IoT Edge device.

Event Hub is designed to stream millions of messages per second. Communication is unidirectional. In line with this streaming, processing also happens: several big data analytics services, including Databricks, Stream Analytics, and HDInsight, can read and process data from the hub. Identity is handled with shared access signatures.

Azure Cloud

Communication in Cloud-native System

In a cloud-native system, modules are containerised. They talk to each other through APIs and other messaging protocols. In a typical web app, the front end talks to the backend over a network protocol. This communication raises concerns such as:

  • network-related concerns: congestion, latency, transient faults
  • resiliency (retrying a failed request)
  • idempotency
  • authentication and authorisation
  • message serialisation and deserialisation (can be expensive)
  • message encryption and decryption
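As one concrete example, the resiliency concern above is commonly handled with retries and exponential backoff. A minimal sketch follows; the flaky call and the tiny delays are for illustration, and retrying like this is only safe when the request is idempotent:

```python
import time

def with_retries(call, attempts=3, base_delay=0.01):
    """Invoke `call`; on a transient failure, back off exponentially
    and retry. Assumes `call` is idempotent, so a retry after an
    ambiguous failure cannot duplicate its effect."""
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the fault
            time.sleep(base_delay * (2 ** attempt))

# Simulated transient fault: fails twice, then succeeds.
failures = {"left": 2}
def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
```

In production this pattern usually adds jitter to the delay and distinguishes transient faults (worth retrying) from permanent ones (not worth retrying).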

This example of a cloud-native system implements a microservice-based architecture with many small, independent microservices. Each microservice lives in a container and is deployed to a cluster.

A cluster groups a pool of VMs together to form a highly available environment. They’re managed with an orchestration tool called Kubernetes.

to be continued…