Add Two Numbers

You are given two non-empty linked lists representing two non-negative integers. The digits are stored in reverse order, and each of their nodes contains a single digit. Add the two numbers and return the sum as a linked list.

You may assume the two numbers do not contain any leading zero, except the number 0 itself.

Example 1:

Bash
Input: (2 -> 4 -> 3) + (5 -> 6 -> 4)

Output: 7 -> 0 -> 8

Explanation: 342 + 465 = 807.

Example 2:

Bash
Input: l1 = [9,9,9,9,9,9,9], l2 = [9,9,9,9]
Output: [8,9,9,9,0,0,0,1]

Solution Explanation

The approach is to traverse both linked lists node by node, add the corresponding digits plus any carry at each step, and build the result list as we go. All it really takes is familiarity with the linked list data structure.

solution.py
# Definition for singly-linked list.
# class ListNode:
#     def __init__(self, val=0, next=None):
#         self.val = val
#         self.next = next
from typing import Optional

class Solution:
    def addTwoNumbers(self, l1: Optional[ListNode], l2: Optional[ListNode]) -> Optional[ListNode]:
        dummy = ListNode()   # placeholder head so we can always append via current.next
        current = dummy
        carry = 0

        # Keep going while either list has digits left or a carry remains.
        while l1 or l2 or carry:
            n1 = l1.val if l1 else 0
            n2 = l2.val if l2 else 0
            s = n1 + n2 + carry

            current.next = ListNode(s % 10)  # digit for this position
            current = current.next
            carry = s // 10                  # carry for the next position

            l1 = l1.next if l1 else None
            l2 = l2.next if l2 else None
        return dummy.next
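
To try the solution locally, here's a minimal sketch that builds the input lists and reads the result back; the ListNode class and the build / to_list helpers are assumptions for local testing, not part of the LeetCode template.

Python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build(digits):
    # Build a linked list from a list of digits (least-significant digit first).
    dummy = ListNode()
    current = dummy
    for d in digits:
        current.next = ListNode(d)
        current = current.next
    return dummy.next

def to_list(node):
    # Collect node values back into a plain Python list.
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

result = Solution().addTwoNumbers(build([2, 4, 3]), build([5, 6, 4]))
print(to_list(result))  # [7, 0, 8], i.e. 342 + 465 = 807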

Merge Two Sorted Lists

You are given the heads of two sorted linked lists list1 and list2.

Merge the two lists into one sorted list. The list should be made by splicing together the nodes of the first two lists.

Return the head of the merged linked list.

Example 1

Bash
Input: list1 = [1,2,4], list2 = [1,3,4]
Output: [1,1,2,3,4,4]

Example 2

Bash
Input: list1 = [], list2 = [0]
Output: [0]

Solution Explanation

The key observation is that we don't need to keep traversing once one of the lists is exhausted. Since both lists are already sorted, we walk through them together, picking the smaller node at each step, and once we reach the end of the shorter list we simply attach the remainder of the other one.

solution.py
# Definition for singly-linked list.
# class ListNode:
#     def __init__(self, val=0, next=None):
#         self.val = val
#         self.next = next
from typing import Optional

class Solution:
    def mergeTwoLists(self, list1: Optional[ListNode], list2: Optional[ListNode]) -> Optional[ListNode]:
        # If either list is empty, the other one is already the answer.
        if not list1: return list2
        if not list2: return list1

        dummy = ListNode()   # placeholder head to simplify appending
        current = dummy

        # Splice in the smaller of the two current nodes at each step.
        while list1 and list2:
            if list1.val < list2.val:
                current.next = list1
                list1 = list1.next
            else:
                current.next = list2
                list2 = list2.next
            current = current.next

        # Attach whatever remains of the list that wasn't exhausted.
        if list1:
            current.next = list1
        if list2:
            current.next = list2
        return dummy.next
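
As with the previous problem, a quick local check; this reuses the hypothetical ListNode, build and to_list helpers sketched under Add Two Numbers above.

Python
merged = Solution().mergeTwoLists(build([1, 2, 4]), build([1, 3, 4]))
print(to_list(merged))  # [1, 1, 2, 3, 4, 4]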

AWS AppRunner

Introduction

Containers have become one of the most popular and reliable ways to build and deploy services, and their adoption keeps growing. Docker is the most widely used way to containerize applications, and as demand grows, more and more services offer Docker-based deployment. AWS is not behind either and provides not a few, but 17 different ways to run containerized applications. It would make for a really lengthy article if I were to go over every way we can run a container on AWS, and I'm sure it wouldn't be very useful to anyone either, so I'm going to keep it short and talk about one particular service, AWS AppRunner. Before that, let's look at some of the popular ways to run containers on AWS:

  • AWS ECS ( Elastic Container Service ), which comes in 2 flavors:
      • ECS on Fargate
      • ECS on EC2
  • AWS EKS ( the managed Kubernetes offering from AWS )
  • AppRunner ( of course )
  • AWS Lambda ( also one of the most popular ways of running containers, and a service I love )
  • AWS LightSail
  • Good old EC2

I know it's getting to be an exhausting list, but if it isn't enough for you and you want the entire catalogue, you can head over to this Link to read further.

ECS vs AppRunner

Now this brings us to the next question: why do we need yet another service to run a freaking container if you already know ECS? Well... good question. The answer is, you might not. Still, let's dive into the pros and cons and then decide what's the best fit:

  • AppRunner charges based on usage, offers auto-deploy, and comes with pre-configured load balancing, HTTPS and auto-scaling. If there are no requests, you'll only be charged for memory usage ( and even then just the memory the app is using while running, not what's allocated ): no CPU charges, no load-balancer charges.
  • It abstracts away all the complexity we have to deal with when working with ECS. We also don't have to provision an ELB, which saves a fortune if you're running a small app and are worried about it going viral. AppRunner will auto-scale based on usage and scale back down once the traffic is gone.

So these are the 2 main reasons AppRunner is favored for small-scale applications. Once the application starts getting consistent traffic, you can consider moving it to ECS, which will be fairly easy since ECS also runs containers.

The Code

Now, enough talk, where is the code, you might ask. Well, here's the CloudFormation for spinning up an AppRunner service.

apprunner.yml
Resources:
  AppRunnerRole:
    Metadata:
      "aws:description": "An IAM Role for App Runner to use on your behalf to pull your image from ECR"
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2008-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - build.apprunner.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSAppRunnerServicePolicyForECRAccess

  InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: tasks.apprunner.amazonaws.com
            Action: "sts:AssumeRole"
      Policies:
        - PolicyName: AppRunnerServicePolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - "ssm:GetParameters"
                Resource:
                  - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/redis/user"
              - Effect: Allow
                Action:
                  - "dynamodb:PutItem"
                  - "dynamodb:Query"
                  - "dynamodb:GetItem"
                  - "dynamodb:UpdateItem"
                  - "dynamodb:DeleteItem"
                  - "dynamodb:GetRecords"
                Resource:
                  - !GetAtt MyTable.Arn
                  - !Sub "${MyTable.Arn}/*"

  AppRunnerService:
    Type: AWS::AppRunner::Service
    Properties:
      ServiceName: MyService
      InstanceConfiguration:
        Cpu: 256
        Memory: 512
        InstanceRoleArn: !GetAtt InstanceRole.Arn
      HealthCheckConfiguration:
        Protocol: HTTP
        Path: /
      SourceConfiguration:
        AutoDeploymentsEnabled: false
        AuthenticationConfiguration:
          AccessRoleArn: !GetAtt AppRunnerRole.Arn
        ImageRepository:
          ImageRepositoryType: ECR
          ImageIdentifier: !Sub "${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/${self:custom.ecr-repo.myimage}:myimagetag"
          ImageConfiguration:
            Port: 8080
            RuntimeEnvironmentVariables:
              - Name: STAGE
                Value: ${self:provider.stage}
            RuntimeEnvironmentSecrets:
              - Name: REDIS_USER
                Value: !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/redis/user"

Outputs:
  AppRunnerServiceUrl:
    Value: !GetAtt AppRunnerService.ServiceUrl
    Description: The URL of the service
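
Since the template uses Serverless Framework variables ( the ${self:...} references ), it's meant to be included from a serverless.yml rather than deployed as plain CloudFormation. Here's a minimal sketch of that wiring, assuming the snippet above is saved as apprunner.yml and that the referenced MyTable resource and ECR image exist; the service name and the custom ecr-repo value are placeholders.

serverless.yml
service: apprunner-demo
frameworkVersion: "3"

provider:
  name: aws
  stage: dev
  region: us-east-1

custom:
  ecr-repo:
    myimage: my-app   # hypothetical ECR repository name referenced by the template

resources:
  - ${file(apprunner.yml)}

Deploying is then a single serverless deploy --stage dev.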

Pre-commit hooks with Husky

Introduction

Maintaining high-quality code, consistency and tests is essential for any software project. When multiple people work on the same project, it's common for unlinted code or a failing test to slip through. To address this, Husky provides a powerful automation tool that runs defined checks and tests before anyone can commit.

Pre-commit hooks and Husky

Pre-commit hooks are a set of commands that run before a commit is finalized. They ensure that predefined rules such as linting, formatting and running tests all pass before the git commit is registered. Husky lets developers automate these pre-commit checks, improving code quality and productivity.

Set Up

I love working with the NestJS Framework, so I'll set up a project from scratch and then enable pre-commit hooks for linting and running tests.

We will bootstrap a new project using the NestJS CLI command:

nest new my-project

It will initialize a new project and set up a git repository.

Now, in order to configure Husky, we need to install some new packages as dev dependencies: yarn add -D husky lint-staged eslint eslint-config-prettier

Let’s look at the dependencies and see what they do:

  1. Husky: It lets you define pre-commit and pre-push hooks. You can specify any commands you wish to execute before a commit actually lands in the repository.
  2. Eslint: ESLint is a great package built to enforce rules across the codebase. We can specify the coding practices we want applied throughout the project.
  3. Lint-staged: lint-staged runs linting on staged files before a commit when configured with Husky. Husky only defines which command runs on pre-commit; lint-staged is the actual command ( see the sketch after this list ).
  4. Eslint-config-prettier: It exists purely to override ESLint rules so they don't conflict with Prettier.
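
lint-staged reads its configuration from package.json ( or a dedicated config file ). A minimal sketch of what that excerpt could look like for a NestJS project, assuming TypeScript sources and the Prettier/ESLint setup above; the glob and commands are illustrative, not something Husky mandates.

package.json
{
  "lint-staged": {
    "*.ts": [
      "prettier --write",
      "eslint --fix"
    ]
  }
}

With this in place, the pre-commit hook can simply run npx lint-staged so only the staged files get formatted and linted, instead of the whole repository.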

Initialize husky: npx husky-init

Update the file .husky/pre-commit

.husky/pre-commit
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

yarn format
yarn lint
yarn test

Now, whenever we run git commit, all the defined scripts will run before the commit is finalized.

Two Sum

Given an array of integers nums and an integer target, return indices of the two numbers such that they add up to target. You may assume that each input would have exactly one solution, and you may not use the same element twice. You can return the answer in any order.

Example:

Input: nums = [2,7,11,15], target = 9

Output: [0,1]

Explanation: Because nums[0] + nums[1] == 9, we return [0, 1].

Solution Explanation

The approach is to walk through the list, compute each number's complement by subtracting it from the target, and check whether that complement has already appeared in the list.

We can use a hashmap to track the index of each item so we don't have to iterate through the list over and over while looking for the complement's index.

solution.py
from typing import List

def twoSum(nums: List[int], target: int) -> List[int]:
    hashMap = {}   # value -> index of the values seen so far
    n = len(nums)

    for i in range(n):
        complement = target - nums[i]
        if complement in hashMap:
            return [hashMap[complement], i]
        hashMap[nums[i]] = i
    return []
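
A quick sanity check against the example input, assuming the function above is in scope:

Python
print(twoSum([2, 7, 11, 15], 9))  # [0, 1]

The hashmap keeps this to a single pass: O(n) time and O(n) extra space.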

That's it for this problem, one of the easiest problems on LeetCode.

AWS S3 Storage Tiers and Lifecycle

AWS S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. It is one of the most used AWS services and it took the world by storm. It is designed for 99.999999999% ( 11 9's ) of durability. In this article, we'll go through the different storage tiers of S3 and look at how we can define Lifecycle Configurations to automatically transition objects from one storage tier to another as our needs dictate, optimizing cost while maintaining the same level of compliance.

Storage Tiers:

  • S3 Standard ( The most commonly used ): S3 Standard offers high durability, availability, and performance object storage for frequently accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and gaming applications, and big data analytics. It is also the most expensive tier in terms of storage cost.

  • S3 Intelligent Tiering: A more recently introduced class that automatically transitions objects between the Standard ( frequent access ) and IA ( infrequent access ) tiers based on access patterns. One major thing to note when choosing Intelligent Tiering is to make sure the object will be kept for at least 30 days, since it charges for a minimum of 30 days. Storage is priced the same as the Standard tier ( plus a small per-object monitoring charge ).

  • S3 IA: S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. It offers the high durability, high throughput, and low latency of S3 Standard, with a lower per-GB storage price and a per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files.

  • S3 1Z IA: Very similar to S3 IA, with one major difference: the stored object is kept in only one Availability Zone and hence has only 99.5% availability. It is about 20% cheaper than the S3 IA tier.

  • S3 Glacier and Glacier Deep Archive: These are the lowest-cost storage tiers, with the same durability but longer retrieval times, ranging from minutes or hours for Glacier up to 12-48 hours for Glacier Deep Archive.

To see the complete pricing and features of these storage tiers, visit the official documentation.
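
Lifecycle rules aside, you can also pick a tier explicitly at upload time; for example with the AWS CLI, where the bucket and key below are just placeholders:

Bash
aws s3 cp ./backup.tar.gz s3://test-bucket/backups/backup.tar.gz --storage-class STANDARD_IA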

LifeCycleConfiguration

YAML
Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: test-bucket
      VersioningConfiguration:
        Status: Suspended
      LifecycleConfiguration:
        Rules:
          - Status: Enabled
            Transitions:
              - TransitionInDays: 30
                StorageClass: STANDARD_IA
              - TransitionInDays: 120
                StorageClass: GLACIER
            Id: TransitionRules

The above code snippet creates a rule which will transition all the objects as follows:

  • From Standard Tier to Standard IA after 30 days
  • From Standard IA tier to Glacier after 120 days
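
Once the stack is deployed, you can confirm the rule landed on the bucket with the AWS CLI ( using the bucket name from the template above ):

Bash
aws s3api get-bucket-lifecycle-configuration --bucket test-bucket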

Serverless Cloudformation Parameters

Introduction

While working with AWS, we all love creating infra with code, be it Terraform, CDK or CloudFormation. Personally, I love CloudFormation over anything else. We also use parameters to make the CloudFormation more dynamic, whether that's supplying different values for different environments or using different config values.

In order to deploy our beautifully crafted CloudFormation, we have 2 tools at hand: AWS SAM or the Serverless Framework. Having worked with both, I find the Serverless Framework light years ahead of AWS SAM. One core reason I dislike AWS SAM so much is that we can't break a long CloudFormation template apart into different files unless we want them as nested stacks.

Serverless Framework

If we talk about Serverless, well, I've been using this tool for over 5 years and I have a lot of love for this framework. It addresses every pain point that SAM leaves us with, while supporting pure CloudFormation code and providing an easy way to split CloudFormation snippets into multiple files.

If your stack grows huge, like really, really huge, Serverless even provides the Serverless Compose plugin, which not only structures big stacks really nicely but also provides the following super cool features ( a small sketch follows the list ):

  • Parallel deployment for multiple services
  • Capability to share outputs from one stack(service) to another
  • Running commands across multiple stacks
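
A minimal serverless-compose.yml sketch with two hypothetical services, where the second consumes an output of the first; the service names and the queueUrl output are assumptions for illustration.

serverless-compose.yml
services:
  infra:
    path: infra            # folder containing its own serverless.yml
  api:
    path: api
    params:
      queueUrl: ${infra.queueUrl}   # reads a CloudFormation output exported by the infra service

Running serverless deploy from the folder containing this file deploys both services, resolving the dependency between them and parallelizing where possible.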

Now, these are all nice and good, but what about CloudFormation Parameters, which are our main concern here? Well, Serverless supports those as well. Let's look into how we can make use of them.

Using Parameters in Serverless

s3.yml
Parameters:
  BucketName:
    Type: String
    Default: test-bucket
  Stage:
    Type: String
    Default: dev
    AllowedValues: [dev, uat, prod]

Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${BucketName}-${Stage}"
      AccessControl: Private
serverless.yml
service: s3-bucket-param
frameworkVersion: "3"

provider:
  name: aws
  stage: dev
  region: us-east-1
  stackParameters:
    - ParameterKey: BucketName
      ParameterValue: ${param:BucketName}
    - ParameterKey: Stage
      ParameterValue: ${param:Stage}

resources:
  - ${file(s3.yml)}

This is how we use CloudFormation Parameters in Serverless. We define all the parameters we require under stackParameters and provide each value either directly or through Serverless parameters. To supply the values and deploy the stack, we run the following command:

serverless deploy --param "BucketName=my-test-bucket" --param "Stage=dev"