Thursday, April 27, 2023

GitHub Actions to Deploy Microservices to Kubernetes on AWS EKS and Oracle Cloud Infrastructure

GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production.

GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository. For example, you can run a workflow to automatically add the appropriate labels whenever someone creates a new issue in your repository.
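
For example, here is a minimal sketch of such a workflow. The triage label name and the use of the actions/github-script action are illustrative choices, not the only way to do this:

Code snippet

name: Label new issues

on:
  issues:
    types: [opened]

jobs:
  label:
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // Add a hypothetical "triage" label to every newly opened issue
            await github.rest.issues.addLabels({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              labels: ['triage']
            });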

Code snippet

name: Deploy Spring Boot to Kubernetes

on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Java
        uses: actions/setup-java@v1
        with:
          java-version: 11
      - name: Build with Maven
        run: mvn clean install

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Configure kubeconfig
        # Write the cluster credentials from the KUBECONFIG repository
        # secret; kubectl itself is preinstalled on ubuntu-latest runners
        run: |
          mkdir -p $HOME/.kube
          echo "${{ secrets.KUBECONFIG }}" > $HOME/.kube/config
      - name: Deploy to Kubernetes
        run: kubectl apply -f deployment.yaml

 

This pipeline first checks out the code from the repository, then sets up Java and builds the application with Maven. Once the application is built, the deploy job writes the cluster credentials from the KUBECONFIG secret to a kubeconfig file and deploys the application with the kubectl apply command.

Here is a breakdown of each step in the pipeline:

  • on: push: branches: master - Triggers the pipeline when a new commit is pushed to the master branch.
  • jobs: build - This job builds the Spring Boot application.
  • steps: - These steps run in order to build the application.
    • uses: actions/checkout@v2 - Checks out the code from the repository.
    • name: Set up Java - Installs Java 11 on the runner.
    • run: mvn clean install - Builds the application with Maven.
  • jobs: deploy - This job deploys the Spring Boot application to Kubernetes.
  • needs: build - The deploy job will not start until the build job has completed successfully.
  • steps: - These steps run in order to deploy the application.
    • uses: actions/checkout@v2 - Checks out the code from the repository.
    • name: Configure kubeconfig - Writes the cluster credentials stored in the KUBECONFIG repository secret to $HOME/.kube/config so that kubectl can authenticate to the cluster.
    • name: Deploy to Kubernetes - Applies deployment.yaml with the kubectl apply command; a minimal sketch of that manifest follows below.
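
The deployment.yaml referenced by the workflow is not shown above; a minimal sketch might look like this. The image name my-registry/my-spring-boot-application:latest and port 8080 are assumptions to adapt to your setup:

Code snippet

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-spring-boot-application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-spring-boot-application
  template:
    metadata:
      labels:
        app: my-spring-boot-application
    spec:
      containers:
        - name: app
          # Hypothetical image; push your own build to a registry you control
          image: my-registry/my-spring-boot-application:latest
          ports:
            - containerPort: 8080  # Spring Boot's default HTTP port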

Deploy Microservice in Oracle Cloud Infrastructure

1.   Create a Compute Instance

First, you need to create a Compute Instance in Oracle Cloud Infrastructure. This is a virtual machine that will host your Spring Boot application.

To create a Compute Instance, follow these steps:

1.   Go to the Compute Instances page in the Oracle Cloud Infrastructure Console.

2.   Click Create Instance.

3.   In the Create Instance dialog, select the Operating System and Image for your Compute Instance.

4.   Click Create.

It will take a few minutes for your Compute Instance to be created.

2.   Install Java and Maven

Once your Compute Instance is created, you need to install Java and Maven. Java is the language that Spring Boot applications are written in, and Maven is a build tool used to build and package them. The commands below assume an Ubuntu-based image; on an Oracle Linux image, use yum instead of apt-get.

To install Java and Maven, follow these steps:

1.   SSH into your Compute Instance.

2.   Run the following command to install Java:

Code snippet

sudo apt-get install openjdk-11-jdk

 

3.   Run the following command to install Maven:

Code snippet

sudo apt-get install maven
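
You can confirm that both installs succeeded before moving on:

Code snippet

java -version
mvn -version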

 

3.   Create a Spring Boot Application

Now that you have Java and Maven installed, you can create a Spring Boot application.

To create a Spring Boot application, follow these steps:

1.   Create a new directory for your Spring Boot application.

2.   In the directory, run the following command to download a new Spring Boot project from the Spring Initializr and unpack it:

curl https://start.spring.io/starter.zip -d groupId=com.example -d artifactId=my-spring-boot-application -d dependencies=web -o starter.zip

unzip starter.zip

This will create a new Spring Boot project with the artifactId my-spring-boot-application and the Spring Web starter included.

4.   Build and Deploy the Application

Once you have created a Spring Boot application, you can build and deploy it.

To build and deploy the application, follow these steps:

1.   In the directory of your Spring Boot application, run the following command to build the application:

mvn clean install

 



2.   Copy the built jar to your Compute Instance and start it. The ubuntu user below is the default on Ubuntu images, and the 0.0.1-SNAPSHOT version is the Spring Initializr default; adjust both to match your setup:

scp target/my-spring-boot-application-0.0.1-SNAPSHOT.jar ubuntu@<instance-public-ip>:~

ssh ubuntu@<instance-public-ip> java -jar my-spring-boot-application-0.0.1-SNAPSHOT.jar

This starts your Spring Boot application on Oracle Cloud Infrastructure.

5.   Access the Application

Once your Spring Boot application is running, you can access it through the instance's public IP address on the port the application listens on (8080 by default for Spring Boot). First allow that port in the ingress rules of the security list (or network security group) attached to the instance's subnet, then go to the following URL:

http://<instance-public-ip>:8080

You can find the instance's public IP address on the Compute Instances page in the Oracle Cloud Infrastructure Console.

 

Friday, April 21, 2023

How to Deploy Kubernetes with Helm Charts

Introduction:

Kubernetes is a powerful container orchestration platform that can be used to deploy and manage containerized applications. Helm is a package manager for Kubernetes that makes it easy to deploy and manage Helm charts, which are packages that contain all the Kubernetes resources needed to deploy an application.

In this blog post, we will show you how to deploy an application to Kubernetes with Helm charts. We will start by creating a Helm chart for a simple application, and then we will deploy the application to Kubernetes.

Prerequisites:

· Kubernetes cluster

· Helm

Creating a Helm Chart:

To create a Helm chart, we will use the following command:

helm create my-app

This will create a new directory called my-app that contains the Helm chart for our application.

The my-app directory contains the following files:

· Chart.yaml: This file contains metadata about the Helm chart, such as the name, version, and description.

· values.yaml: This file contains default values for the Helm chart's parameters.

· templates/: This directory contains the Kubernetes resource templates that are rendered and deployed when the Helm chart is installed.
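
As an illustration, the values.yaml generated by helm create includes entries along these lines (abbreviated; the exact contents depend on your Helm version):

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""

service:
  type: ClusterIP
  port: 80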

 

Deploying the Helm Chart:

To deploy the Helm chart, we will use the following command (Helm 3 expects a release name followed by the chart path):

helm install my-app ./my-app

This will deploy the Helm chart to Kubernetes as a release named my-app.
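
You can also override any of the chart's default values at install time and then list the releases in the cluster; for example (replicaCount is one of the defaults that helm create generates):

helm install my-app ./my-app --set replicaCount=2

helm list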

Once the Helm chart has been deployed, you can verify that the application is running by using the following command:

kubectl get pods

This command lists all the pods running in the current namespace of your Kubernetes cluster.

Conclusion:

In this blog post, we showed you how to deploy an application to Kubernetes with Helm charts. We started by creating a Helm chart for a simple application, and then we deployed the application to Kubernetes.

Monday, April 3, 2023

Data refresh in Quick Sight

 Amazon QuickSight does not have a direct mechanism to be notified when underlying data sources (such as an S3 file, RDS database, or Redshift table) are refreshed. However, the recommended approach is to use an AWS Lambda function or an AWS Step Functions workflow in combination with the QuickSight API to trigger dataset refreshes programmatically.

Suggested Approach:

  1. Use QuickSight API to Trigger a Dataset Refresh: After your data is updated (e.g., when an ETL job completes or an S3 file is updated), invoke the QuickSight API to refresh the dataset. This can be done via the AWS SDK or using an AWS Lambda function.

  2. Event-Driven Automation with CloudWatch or Lambda: You can automate the refresh process using AWS CloudWatch Events, Amazon S3 Events, or database triggers to detect data changes and initiate a refresh using the QuickSight API.

  3. Refresh QuickSight Programmatically: To trigger a refresh programmatically, use the CreateIngestion API; the UpdateDataSet API changes a dataset's definition rather than reloading its data.

Implementation Options:

1. CreateIngestion API to Refresh Dataset:

This approach will trigger a new ingestion (data load) in QuickSight. You can automate this API call using AWS Lambda, Step Functions, or another AWS service.


More details can be found in the Amazon QuickSight API Reference.

Python Example:

import boto3
import datetime

# Replace with your actual AWS Region and Dataset ID
AWS_REGION = 'us-east-1'
DATASET_ID = 'your-dataset-id'
AWS_ACCOUNT_ID = 'your-account-id'

# Initialize QuickSight client
quicksight_client = boto3.client('quicksight', region_name=AWS_REGION)

def refresh_quicksight_dataset():
    response = quicksight_client.create_ingestion(
        DataSetId=DATASET_ID,
        IngestionId='Ingestion_' + datetime.datetime.now().strftime("%Y%m%d%H%M%S"),
        AwsAccountId=AWS_ACCOUNT_ID
    )
    print(f"Ingestion Response: {response}")
    return response

# Call the function to trigger ingestion
refresh_quicksight_dataset()

This function triggers a dataset refresh for the specified dataset ID. You can automate the execution of this script using Lambda functions or scheduled events.

2. Automation using AWS Lambda:

If the data refresh is triggered by another event (e.g., S3 file update, Glue job completion, or RDS data change), you can use a Lambda function with an S3 event or a CloudWatch Event Rule to call the create_ingestion API.

Example Lambda Function Configuration:

  • Trigger: Set up an S3 Event Notification (PUT) or a CloudWatch Event Rule (for Glue job completion).
  • Action: Call the create_ingestion API to refresh the QuickSight dataset, as in the handler sketch below.
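
A minimal handler for this configuration might look like the following sketch (the region, dataset ID, and account ID are placeholders, as in the earlier example):

import boto3
import datetime

AWS_REGION = 'us-east-1'
DATASET_ID = 'your-dataset-id'
AWS_ACCOUNT_ID = 'your-account-id'

quicksight_client = boto3.client('quicksight', region_name=AWS_REGION)

def lambda_handler(event, context):
    # Invoked by the S3 event notification or CloudWatch Event Rule;
    # start a fresh QuickSight ingestion for the dataset
    ingestion_id = 'Ingestion_' + datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    response = quicksight_client.create_ingestion(
        DataSetId=DATASET_ID,
        IngestionId=ingestion_id,
        AwsAccountId=AWS_ACCOUNT_ID
    )
    return {'IngestionId': ingestion_id, 'IngestionStatus': response['IngestionStatus']}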

3. Event-Driven Approach using Step Functions:

For complex workflows (e.g., multiple datasets need to be refreshed sequentially), you can use AWS Step Functions to define a state machine that:

  1. Triggers data ingestion using the QuickSight API.
  2. Checks the status of the ingestion (see the polling sketch after this list).
  3. Updates dashboards once ingestion is complete.
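
Step 2 can be implemented with the DescribeIngestion API; here is a minimal polling sketch (placeholder region and IDs, as before):

import time
import boto3

quicksight_client = boto3.client('quicksight', region_name='us-east-1')

def wait_for_ingestion(dataset_id, ingestion_id, account_id):
    # Poll until the ingestion reaches a terminal state
    while True:
        response = quicksight_client.describe_ingestion(
            DataSetId=dataset_id,
            IngestionId=ingestion_id,
            AwsAccountId=account_id
        )
        status = response['Ingestion']['IngestionStatus']
        if status in ('COMPLETED', 'FAILED', 'CANCELLED'):
            return status
        time.sleep(10)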

Monitoring and Notifications:

  • Use CloudWatch Alarms to monitor the status of the ingestions and send notifications if ingestion fails.
  • Automate retry mechanisms using Lambda or Step Functions.

Key Considerations:

  • Dataset IDs: Ensure that you have the dataset ID and account ID.
  • Permissions: The AWS Identity needs appropriate permissions (quicksight:CreateIngestion, quicksight:UpdateDataSetPermissions, etc.) to manage dataset ingestion.
  • Ingestion Limits: Each dataset can have only one active ingestion at a time, so coordinate to avoid conflicts.

By combining these approaches, you can effectively notify and trigger QuickSight to reflect changes in your underlying data, ensuring that your visualizations remain up-to-date.

