Categories
JavaScript Programming

Advanced JavaScript Objects Introduction

To myself…

Using this keyword

Suppose we have an object

const myDog = {
   breed: 'dalmatian',
   bark() {
      console.log('woof');
   },
   getBreed() {
      console.log(breed);
   }
}

When we run myDog.getBreed(), we expect to see dalmatian in our console. In actuality, however, we get an error: ReferenceError: breed is not defined. Why is that?

The reference error surfaces because, by default, an object method has no implicit visibility into the calling object’s properties; breed on its own is looked up as a standalone variable, not as a property of myDog.

To indicate that we are referring to the calling object’s breed property, we need to use this as in

const myDog = {
   breed: 'dalmatian',
   bark() {
      console.log('woof');
   },
   getBreed() {
      console.log(this.breed);
   }
}

The this keyword references the calling object, allowing us to access what’s inside the myDog object.

Now suppose we use an arrow function in our object method

const myDog = {
   breed: 'dalmatian',
   bark() {
      console.log('woof');
   },
   getBreed: () => {
      console.log(this.breed);
   }
}

Surprisingly, this logs undefined. Why is that?

An arrow function does not get its own this; it inherits this from the enclosing scope, which here is the global scope rather than the calling object. Since there is no breed property on the global object, this.breed evaluates to undefined.

What does this mean? It means we should avoid arrow functions for object methods that rely on this.

Conveying privacy

In JavaScript, we can only convey that a property is private by naming convention, using an underscore prefix, as in

const imAnObject = {
   _myPrivateProperty: 'i am private'
}

Creating factory functions

Just like a real world factory that manufactures copies of an item quickly and on a massive scale, a factory function is a function that returns an object and can be reused to make multiple object instances.

Why is this useful? With a factory function, we don’t need to create an object literal every time we need a new object. We can invoke the factory function with the necessary arguments to make an object for us!
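As a sketch, here is a hypothetical dogFactory (the name and properties are illustrative, building on the myDog shape from earlier): one function call stamps out a fully formed dog object each time.

```javascript
// A factory function: returns a fresh object on every call.
function dogFactory(breed, sound) {
  return {
    breed,
    bark() {
      console.log(sound);
    },
    getBreed() {
      console.log(this.breed);
    }
  };
}

// Reuse the factory to mint as many instances as we need.
const dalmatian = dogFactory('dalmatian', 'woof');
const corgi = dogFactory('corgi', 'yip');
```

Each call returns a brand-new object, so the two instances don’t share state.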

Categories
JavaScript Programming

Undefined vs Null in JavaScript

To myself… May I help other fellow coders use undefined and null appropriately through this short article.

Variable declaration and initialisation

Before having a solid understanding of the differences between undefined and null, it’s important to know the steps in declaring and initialising a variable.

Declaring is giving the variable a name.

let catName;

Initialising is giving the variable a value

catName = "Felicia";

Combining these two statements, we have a variable that has been declared with a name catName and a value of Felicia

let catName; // declaration
catName = "Felicia"; // initialisation

Undefined

Technically speaking, undefined means lack of an assigned value.

You can think of undefined as the default value of a variable that has not yet been assigned a value.

Rephrased slightly, a variable is undefined when it has been declared but no value has been assigned to it. (Accessing a variable that was never declared at all throws a ReferenceError instead, although typeof still reports 'undefined' for it.)

Null

With null, the variable is declared and the coder has explicitly set its value to null.

Null can be used to mean that the value does exist but is not yet known at this point. For example, consider an online form with an age field. Setting the age variable to null means we know the person’s age exists, but we don’t know it yet (since the person has not filled in the details).

Difference in data types

The undefined value is its own data type, whereas typeof null returns 'object'. This is a long-standing quirk of the language: null is actually a primitive value, but typeof misreports it. Pretty confusing, but that’s how undefined and null are designed in JavaScript.
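A quick console session makes both behaviours concrete:

```javascript
let declaredOnly;              // declared, never assigned
const explicitlyEmpty = null;  // the coder signals "no value yet"

console.log(typeof declaredOnly);     // 'undefined'
console.log(typeof explicitlyEmpty);  // 'object' – the long-standing quirk
console.log(declaredOnly == null);    // true  (loose equality treats them alike)
console.log(declaredOnly === null);   // false (strict equality does not)
```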

Something to think about

Typically, the distinction comes down to who assigns the emptiness: the JavaScript language or you, the coder. If you need to explicitly mark a variable as empty, use null. If it’s up to JavaScript to assign the value, just declare the variable and it will carry the implicit default value of undefined.

Categories
JavaScript Object-Oriented Design Programming

JavaScript Class and Method

To myself… Even if you hate how JavaScript classes are designed, you must know the class context by heart to avoid encountering type error exceptions in your code.

Creating an object

Suppose we need a JavaScript program to log any dog name to the browser console. We can start off by creating a blueprint of a Dog using class keyword in ES6 like this:

class Dog {
  printName(name) {
    this.print(`My name is ${name}`)
  }

  print(text) {
    console.log(text)
  }
}

const myDog = new Dog()
const printName = myDog.printName // detach the method from its instance
printName('Tyler') // error here

What’s interesting in this piece of code is that the printName context changes once the method is detached from its instance, and an exception is thrown.

Uncaught TypeError: Cannot read property 'print' of undefined

This is because the methods inside a class live on the class prototype and are NOT bound to the class instance; this is resolved from how the function is invoked. Called as a bare function, printName runs with this as undefined.

Binding a method to the class instance

So what can we do to make our code work?

One way is to bind the method to the class instance explicitly in the constructor, using the bind method by hand.

class Dog {
  constructor () {
    this.printName = this.printName.bind(this)
  }
}

There are other ways to attach the method to the object instance, but for now we’ll stick to this one since it’s the simplest and most straightforward solution.
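One of those other ways, sketched here under the assumption that class fields are available (standardised in ES2022 and supported in modern Node and browsers), is a class field holding an arrow function: the arrow captures this lexically, so the method stays bound to the instance even when detached.

```javascript
class Dog {
  // Class field: each instance gets its own pre-bound function.
  printName = (name) => {
    this.print(`My name is ${name}`);
  };

  print(text) {
    console.log(text);
  }
}

const myDog = new Dog();
const printName = myDog.printName; // detached, but still bound
printName('Tyler');                // logs 'My name is Tyler' – no TypeError
```

The trade-off is that every instance carries its own copy of printName instead of sharing one function on the prototype.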

Categories
Azure Cloud

Day 2: AZ-204 Training – Azure’s storage cloud solutions

To myself…

Azure Blob Storage

The world has become increasingly online, and undeniably there is a massive amount of unstructured data that needs to be stored and retrieved. But unstructured data inherently doesn’t have a definition or a data model. Azure’s answer to this problem is Blob Storage.

A Blob storage can store and serve

  • images and documents – directly from browser
  • video, audio – streaming
  • log files – written by applications
  • backups
  • analysis

Access

A Blob storage exposes HTTP and HTTPS endpoints via Azure Storage REST API, Azure PowerShell, Azure CLI, or an Azure Storage client library for users or applications to retrieve blobs / objects globally.
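For example, a blob’s public HTTPS endpoint follows a predictable pattern; the account, container and blob names below are made up for illustration.

```javascript
// Build the endpoint URL for a blob in a (hypothetical) storage account.
const account = 'mystorageaccount';
const container = 'images';
const blobName = 'puppy.png';

const blobUrl = `https://${account}.blob.core.windows.net/${container}/${blobName}`;
console.log(blobUrl);
// https://mystorageaccount.blob.core.windows.net/images/puppy.png
```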

3 Types of Blobs

Type     Example              Max size
Block    text, binary data    4.7 TB
Append   logs                 –
Page     VHD files (disks)    8 TB

Security

All data is automatically encrypted using Storage Service Encryption (SSE).

RBAC can be scoped to the storage account or to an individual container or queue

Data at rest is encrypted using 256-bit AES and is FIPS 140-2 compliant

Encryption does not affect Azure Storage performance

Encryption has no additional cost

Encryption keys are Microsoft-managed, customer-managed, or customer-provided.

management: enc/dec ops, supported services, key storage, key rotation, key usage, key access

Redundancy

scope: data center, zone, region-wide

  • LRS – locally redundant storage
  • ZRS – zone-redundant storage
  • GRS – geo-redundant storage
  • RA-GRS – read-access geo-redundant storage
  • GZRS – geo-zone-redundant storage
  • RA-GZRS – read-access geo-zone-redundant storage

Azure Cosmos DB

Globally distributed database service designed to provide low latency, elastic scalability of throughput, well-defined semantics for data consistency and high availability.

Data consistency as spectrum of choice

For a read operation, consistency corresponds to the freshness and ordering of the database state being requested.

Cosmos DB offers 5 well-defined consistency models backed by comprehensive SLAs.

Strong – the latest committed write is returned
Bounded staleness – reads might lag behind writes by at most “K” versions/updates or by a “T” time interval
Session – within a single client session, reads honor guarantees such as read-your-writes and monotonic reads
Consistent prefix – reads never see out-of-order writes
Eventual – no ordering guarantee for reads

Consistency guarantees

What consistency level to choose?

Keywords

blobs, disks, files, queues, tables

Categories
Azure Cloud

Day 1: AZ-204 Training – Running Apps on Azure App Service

To myself…

On the Azure platform, a speedy way to create an app is to spin up an App Service tied to an App Service Plan.

An App Service is a family of services (Web Apps, API Apps, Mobile Apps) to run applications on the cloud.

An App Service Plan defines the set of compute resources for an app or apps running in an App Service. The compute resources defined are:

  • OS
  • Region
  • Number of VMs
  • Size of VMs
  • Pricing Tier (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated)

The pricing tier defines how much you pay for the plan and which subset of App Service features you get.

App Service Plan pricing

Free and Shared tiers

The app runs on a VM instance shared with other customers’ apps. Each app receives CPU minutes (a CPU quota) on the shared VM instance and cannot scale out.

Dedicated – Basic, Standard, Premium, PremiumV2 and PremiumV3

App runs on dedicated Azure VMs. Only apps in the same App Service plan share the same compute resources. The higher the tier, the more VM instances are available for scaling out.

Isolated

Provides network isolation on top of compute isolation to the apps. It has maximum scale out capabilities.

App Service Plan features

The features include custom domains, TLS/SSL certificates, autoscaling, deployment slots, backups, Traffic Manager integration, etc.

<put the features per tier>

Tips

You can potentially save money by putting multiple apps into one App Service plan.

If your app is resource-intensive, it’s better to create a new isolated App service plan.

Deployment slots can be used to release an app with no downtime. They can also be used to test a new version of your app to see how users respond to a new feature.

Backups are available in App Service too. Services connected to the App Service, such as Storage accounts or SQL Database, can be backed up and restored as needed.

Easy authentication allows authentication of users via Azure Active Directory or Azure Active Directory B2C without making changes to application code or configuration.

MySQL in App is a cost-effective way to run a database. However, since it lives inside the App Service, it’s not readily able to scale.

Categories
Web Development

Using Babel in React App

To myself… Remind yourself about the JavaScript basics, again.

New developers often use JSX and ES6 to write React code with create-react-app without knowing much about how the browser can render the page correctly.

If curiosity has struck you now, the answer to that is through Babel.

What is Babel?

Babel is a JavaScript compiler that translates source files into ES5, the standard specification implemented in major browsers today. It can compile JSX into ES5 JavaScript functions, and ES6 into ES5 JavaScript. This process is known as transpilation because it turns the original source code into new source code rather than outputting an executable.

Configuring Babel for React

In order to transpile JSX or ES6 source code, we need to configure Babel. Babel is highly configurable, and luckily there are some very useful configuration presets we can easily install and use: @babel/preset-env and @babel/preset-react.

After the installation is complete, we need to create a configuration file, typically named .babelrc, located in the root folder.

{
   "presets": [
      "@babel/preset-env",
      "@babel/preset-react"
   ]
}

With all of this set up, we can now write code in JSX and ES6 and serve the compiled output files to the browser.

Categories
DevOps Programming Web Development

ConfigMap and Secrets in Kubernetes

To myself…

Most apps require settings that are unique to themselves. Some of these settings are credentials; others are not as sensitive but still need to be managed when the apps get deployed. These settings are what we usually call configs, or configurations, in the software domain.

A little history

Going back to the old ways of doing things: when an app was deployed, it would usually ship with a file alongside it holding the sensitive and non-sensitive configuration. After the introduction of Docker, deployment works in a very similar way, except that the packaged image also contains the configuration within it.

So what’s so wrong with this approach? Well, for one, when some config needs to change, the image needs to be rebuilt and redeployed. Another reason is that confidential information such as API keys or passwords is exposed as plaintext. Anyone who can get into the system can read or alter these things!

What about using volumes in Docker? That’s a good catch, but volumes for config files still need to be created and mounted before the container is started.

So is there really a better way to solve these problems? Yes, of course! Kubernetes ConfigMap and Secret are the keys to managing these issues.

What is a ConfigMap?

A ConfigMap is a top-level Kubernetes resource for storing configuration data and passing it to containers in a decoupled way. At its core, a ConfigMap is a map of key/value pairs.

What do you mean by decoupled? It means the running container doesn’t need to know the ConfigMap exists, because upon creation of the ConfigMap its contents are passed to the container either as environment variables or as files in a volume.

Creating a ConfigMap

There are 4 ways to create a ConfigMap in Kubernetes.

  1. with a literal
  2. with a manifest file
  3. with a config file
  4. with an env file

Let’s start an example of informing our app how to locate the API for users. In here we define apiUrl as the key and the value is https://my-site.com/api/users

Option 1

There’s no file. We directly tell Kubernetes our setting/s from the command line.

Command to run

kubectl create configmap [cm-name] \
--from-literal=apiUrl=https://my-site.com/api/users

Option 2

There needs to be a manifest file and the format is

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
  labels:
    app: app-settings
data:
  apiUrl: "https://my-site.com/api/users"

Command to run

kubectl create -f configmap.yml

Notice that the command doesn’t need to mention configmap explicitly. This is because the manifest file already declares the kind as ConfigMap.

Option 3

A typical config file, let’s call it api.config

apiUrl=https://my-site.com/api/users

Command to run

kubectl create configmap [cm-name] \
--from-file=[path-to-file]

Under the hood, Kubernetes will generate a ConfigMap

apiVersion: v1
kind: ConfigMap
data:
  api.config: |-
    apiUrl=https://my-site.com/api/users

The file name api.config is the key.

Option 4

With env file, for instance, api.env

apiUrl=https://my-site.com/api/users

Command to run

kubectl create configmap [cm-name] \
--from-env-file=[path-to-file]

Under the hood, Kubernetes will generate a ConfigMap

apiVersion: v1
kind: ConfigMap
data:
  apiUrl: https://my-site.com/api/users

The file name api.env is not used as the key; each entry in the env file becomes its own key.

Consuming a ConfigMap

So now that we know how to create a ConfigMap, what’s next? The next step, of course, is to let it be consumed by pods or containers.

Accessing a ConfigMap via Environment Variables

Let’s say we opted for option 4 using env file. We can let the pod access the ConfigMap by appending an env key like below.

apiVersion: apps/v1
...
spec:
  template:
    ...
    spec:
      containers:
      - ...
        env:
        - name: APIURL
          valueFrom:
            configMapKeyRef:
              name: app-settings
              key: apiUrl

The pod now has an environment variable named APIURL, with its value referenced from the ConfigMap key apiUrl.
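On the application side, the container just reads a plain environment variable. A minimal Node.js sketch (the fallback URL is a made-up default for local development):

```javascript
// APIURL is injected by Kubernetes from the ConfigMap; outside the
// cluster (e.g. local development) we fall back to a hypothetical default.
const apiUrl = process.env.APIURL || 'http://localhost:3000/api/users';
console.log(`Fetching users from ${apiUrl}`);
```

The app never touches the ConfigMap directly, which is exactly the decoupling described above.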

In reality, the env file will contain more entries. If there’s a need to expose all of the environment variables without declaring them one after another, we can use the envFrom key instead of the env key.

envFrom:
- configMapRef:
    name: app-settings

Now all the entries in the env file are available to the pod 🙂

Accessing a ConfigMap via Volume

With this approach, it is possible to change the settings without redeploying the container. Although there can be a 30-60 second delay before the mounted files update, for most cases this is sufficient.

apiVersion: apps/v1
...
spec:
  template:
    ...
    spec:
      volumes:
        - name: app-config-vol
          configMap:
            name: app-settings
      containers:
        - ...
          volumeMounts:
            - name: app-config-vol
              mountPath: /etc/config

Categories
Web Development

The Anatomy of a URL

To myself… You cannot truly call yourself a web guru if you can’t even explain the parts of a URL.

A URL is one of the fundamentals to learn in web development. It stands for Uniform Resource Locator. The browser uses a URL to make a request to a server for some resource. Without URLs, it would be impossible, or at least messy, to locate a resource in the vast ocean of the internet.

Components

Basically, a URL is composed of these parts:

scheme – identifies the protocol to be used to access the resource
host – the name of the host that holds the resource
port – the port on which the resource is served (https is 443, http is 80)
path – the specific resource on the host (a host can have multiple resources)
query – filter/s to retrieve the desired resource

scheme://host:port/path?query
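Modern browsers and Node.js expose these same parts through the built-in URL API, which makes a handy sanity check:

```javascript
const url = new URL('https://example.com:8443/api/users?role=admin');

console.log(url.protocol); // 'https:'  (the scheme, with a trailing colon)
console.log(url.hostname); // 'example.com'
console.log(url.port);     // '8443'
console.log(url.pathname); // '/api/users'
console.log(url.search);   // '?role=admin'
```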

Illustration

I have included other nuances of a URL for my own reference 😃

Categories
DevOps Networking

What Is a Proxy Server?

To myself…

A proxy server is typically a gateway that sits between two entities involved in a request-response model.

Forward proxy

Before getting into specifics: there are 2 types of proxies used on the World Wide Web. The first is the forward proxy server. It sits between a client (usually a browser controlled by a user) and an external network, in this case the internet.

Usually, a forward proxy is used to keep the client from accessing certain websites, or even malicious ones. Take the example of a university where students use the school wifi. If they try to hit certain websites, such as Facebook, the proxy server can block the requests. (Sidetrack: thus making students more focused on their studies rather than watching cute puppies and cats in their feed – just kidding 😆)

A forward proxy can also be used to reach servers that are only accessible from certain geographic locations. For example, some TV series are only available in the UK. If you’re in another region, you can use a VPN server sitting in the UK to make the request on your behalf, tricking the media servers into thinking you are located in the UK even though you’re not. Cool, right? 🙂

Reverse Proxy

Now that you have a big-picture understanding of the forward proxy, let us discuss the second type, the reverse proxy. This type of proxy sits between numerous private resources and the outside network full of clients.

Again, to be more specific, let’s say the outside network is the internet, with users using their browsers to access your servers. If your servers contain private data that should only be accessible to users holding an account on your website, you will not expose those servers directly to the internet. Typically, you place a reverse proxy in front of them to receive requests from these public clients, validate their credentials, and return the appropriate responses.

Validating credentials isn’t the only thing a reverse proxy does, either. A well-known function of the reverse proxy is load balancing. Take an e-commerce website, for example. If your website is popular and millions of people interact with it daily, having one server handle all the requests will likely lead to slow response times. That’s a bad user experience, which may lead users to find better alternatives – you don’t want to lose your customers, right? 😵‍💫 So what you do is spin up multiple servers and balance the incoming requests across them (I won’t discuss the algorithms here).
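The balancing idea can be sketched in a few lines. Below is a naive round-robin picker over a made-up server list; real reverse proxies such as NGINX or HAProxy implement this and smarter strategies for you.

```javascript
// Hypothetical pool of app servers sitting behind the reverse proxy.
const servers = [
  'http://10.0.0.1:8080',
  'http://10.0.0.2:8080',
  'http://10.0.0.3:8080'
];
let next = 0;

// Round robin: each request goes to the next server, wrapping around.
function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length;
  return server;
}

console.log(pickServer()); // http://10.0.0.1:8080
console.log(pickServer()); // http://10.0.0.2:8080
```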

Wrap up

In this short article, I’ve discussed very briefly what a proxy server is and its two types. The forward proxy server works to protect or limit the client accessing the external network (usually the internet). The reverse proxy server, on the other hand, works to protect your servers and balance their workloads against the public clients making requests.

Categories
DevOps Software Architecture

Running PostgreSQL in Kubernetes

To myself…

Gone are the days when we wrote long scripts to provision and run our databases on an on-premise server, at least for most cases that don’t need to comply with a lot of regulatory policies. It’s worth looking at how we can deploy a database in the cloud in just a few lines of code.

Steps

Install a Kubernetes operator in a cloud-based VM

Command

helm repo add postgres-operator https://raw.githubusercontent.com/zalando/postgres-operator/master/charts/postgres-operator
helm install postgres-operator postgres-operator/postgres-operator

Output

/workspace $ helm repo add postgres-operator https://raw.githubusercontent.com/zalando/postgres-operator/master/charts/postgres-operator
"postgres-operator" has been added to your repositories

/workspace $ helm install postgres-operator postgres-operator/postgres-operator
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
manifest_sorter.go:192: info: skipping unknown hook: "crd-install"
NAME: postgres-operator
LAST DEPLOYED: Wed Sep 15 02:44:28 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that postgres-operator has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator"

Great. The PostgreSQL operator is now deployed in the Kubernetes default namespace.

To check if it’s running,

/workspace $ kubectl get pods -l "app.kubernetes.io/name=postgres-operator"
NAME                                READY   STATUS    RESTARTS   AGE
postgres-operator-978857b4d-z6g88   1/1     Running   0          10m

Install the admin dashboard (optional)

I won’t discuss how to get into the dashboard. I’ve added this step so I can refer back to this article if I ever need to configure a dashboard for PostgreSQL.

Command

helm repo add postgres-operator-ui https://raw.githubusercontent.com/zalando/postgres-operator/master/charts/postgres-operator-ui
helm install postgres-operator-ui postgres-operator-ui/postgres-operator-ui --set service.type="NodePort" --set service.nodePort=31255

Output

/workspace $ helm repo add postgres-operator-ui https://raw.githubusercontent.com/zalando/postgres-operator/master/charts/postgres-operator-ui
"postgres-operator-ui" has been added to your repositories

/workspace $ helm install postgres-operator-ui postgres-operator-ui/postgres-operator-ui --set service.type="NodePort" --set service.nodePort=31255
NAME: postgres-operator-ui
LAST DEPLOYED: Wed Sep 15 02:57:51 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that postgres-operator has started, run:

  kubectl --namespace=default get pods -l "app.kubernetes.io/name=postgres-operator-ui"

Check if it’s running

/workspace $ kubectl get pods -l "app.kubernetes.io/name=postgres-operator-ui"
NAME                                    READY   STATUS    RESTARTS   AGE
postgres-operator-ui-6b4dd8cfbb-gvkqb   1/1     Running   0          90s

Verify that we have installed the PostgreSQL operator and UI

Command

kubectl get pods,services,deployments,replicasets

Output

/workspace $ kubectl get pods,services,deployments,replicasets
NAME                                        READY   STATUS              RESTARTS   AGE
pod/postgres-operator-ui-6b4dd8cfbb-6lrd2   0/1     ContainerCreating   0          19s
pod/postgres-operator-978857b4d-qflvs       1/1     Running             0          32s

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP        46s
service/postgres-operator      ClusterIP   10.43.145.110   <none>        8080/TCP       35s
service/postgres-operator-ui   NodePort    10.43.7.84      <none>        80:31255/TCP   19s

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/postgres-operator-ui   0/1     1            0           19s
deployment.apps/postgres-operator      1/1     1            1           35s

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/postgres-operator-ui-6b4dd8cfbb   1         1         0       19s
replicaset.apps/postgres-operator-978857b4d       1         1         1       32s

Describe how the database server should be created

Add the following configuration in the /workspace/db.yaml file

/workspace $ ls
db.yaml  metallb-config

/workspace $ cat db.yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: dataops-bootcamp-cluster
  namespace: default
spec:
  teamId: "dataops-bootcamp"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users:
    dataops:  # database owner
    - superuser
    - createdb
    learner_user: []  # role for application foo
  databases:
    dataops: learner  # dbname: owner
  postgresql:
    version: "12"

Apply the configuration to the cluster

/workspace $ kubectl apply -f /workspace/db.yaml 
postgresql.acid.zalan.do/dataops-bootcamp-cluster created

Get the status of the PostgreSQL resource. It should show running.

/workspace $ kubectl get postgresql --watch
NAME                       TEAM               VERSION   PODS   VOLUME   CPU-REQUEST   MEMORY-REQUEST   AGE   STATUS
dataops-bootcamp-cluster   dataops-bootcamp   12        2      1Gi                                     53s   Running

Notice that it has 2 pods because we declared numberOfInstances: 2 in the configuration file.

Let’s view the details of the cluster

/workspace $ kubectl describe postgresql
Name:         dataops-bootcamp-cluster
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  acid.zalan.do/v1
Kind:         postgresql
Metadata:
  Creation Timestamp:  2021-09-15T03:09:13Z
  Generation:          1
  Managed Fields:
    API Version:  acid.zalan.do/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:databases:
          .:
          f:dataops:
        f:numberOfInstances:
        f:postgresql:
          .:
          f:version:
        f:teamId:
        f:users:
          .:
          f:dataops:
          f:learner_user:
        f:volume:
          .:
          f:size:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-09-15T03:09:13Z
    API Version:  acid.zalan.do/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:PostgresClusterStatus:
    Manager:         postgres-operator
    Operation:       Update
    Time:            2021-09-15T03:09:13Z
  Resource Version:  1045
  Self Link:         /apis/acid.zalan.do/v1/namespaces/default/postgresqls/dataops-bootcamp-cluster
  UID:               96b5b159-bf07-413f-8ff8-5a2204998a07
Spec:
  Databases:
    Dataops:            learner
  Number Of Instances:  2
  Postgresql:
    Version:  12
  Team Id:    dataops-bootcamp
  Users:
    Dataops:
      superuser
      createdb
    learner_user:
  Volume:
    Size:  1Gi
Status:
  Postgres Cluster Status:  Running
Events:
  Type    Reason       Age    From               Message
  ----    ------       ----   ----               -------
  Normal  Create       3m24s  postgres-operator  Started creation of new cluster resources
  Normal  Endpoints    3m24s  postgres-operator  Endpoint "default/dataops-bootcamp-cluster" has been successfully created
  Normal  Services     3m24s  postgres-operator  The service "default/dataops-bootcamp-cluster" for role master has been successfully created
  Normal  Services     3m24s  postgres-operator  The service "default/dataops-bootcamp-cluster-repl" for role replica has been successfully created
  Normal  Secrets      3m23s  postgres-operator  The secrets have been successfully created
  Normal  StatefulSet  3m23s  postgres-operator  Statefulset "default/dataops-bootcamp-cluster" has been successfully created
  Normal  StatefulSet  2m32s  postgres-operator  Pods are ready

Wrap up

In this scenario, we looked at how we can install PostgreSQL in the cloud with ease. We used Helm to install the Kubernetes PostgreSQL operator and its UI. Then we created a k8s config file to declare 2 instances of the database. Finally, we instructed k8s with a simple apply command to spin up the 2 stateful pods based on the config file.