Friday, October 18, 2024

Amazon Bedrock and AWS Rekognition comparison for Image Recognition

Both Amazon Bedrock and AWS Rekognition are AWS services, but they cater to different use cases, especially when it comes to image recognition. Here's a detailed comparison of the two services:

Amazon Bedrock

Amazon Bedrock is a service designed to help developers build and deploy generative AI applications on top of foundation models (primarily language models). It is not designed for image recognition; its focus is text-based tasks, natural language understanding, and generation. However, certain models accessible via Bedrock, such as multimodal models, can support tasks involving image generation or image-related queries.

AWS Rekognition

AWS Rekognition, on the other hand, is a dedicated image and video analysis service. It uses deep learning models to analyze images and videos for object detection, facial recognition, image classification, scene detection, and more. AWS Rekognition is designed specifically for image and video recognition and is widely used for tasks related to security, compliance, media, and more.
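
As a quick illustration, here is a minimal boto3 sketch (assuming configured AWS credentials; the bucket name and object key are hypothetical) of calling Rekognition's label-detection API:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Detect objects and scenes in an image stored in S3 (bucket and key are placeholders)
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "your-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```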

When to Use AWS Rekognition vs. Bedrock for Image-Related Tasks

AWS Rekognition: When and Why to Use

Use Case: Image and video analysis, object detection, face recognition, celebrity detection, text in image (OCR), and moderation (e.g., identifying inappropriate content).

Key Features of AWS Rekognition

  • Image & Video Analysis: Detect objects, people, text, and activities in images and videos.
  • Facial Analysis: Recognize faces in images, detect emotions, and analyze facial attributes.
  • OCR (Optical Character Recognition): Detect text in images and extract it for further use.
  • Content Moderation: Automatically detect inappropriate or unsafe content in images and videos (see the sketch after this list).
  • Face Comparison: Compare a face in an image with a reference image.
  • Celebrity Recognition: Recognize well-known celebrities in images and videos.
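
For the moderation feature above, a minimal boto3 sketch (bucket and key are hypothetical) might look like:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Flag potentially unsafe content in an image stored in S3
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "your-bucket", "Name": "upload.jpg"}},
    MinConfidence=60.0,
)

for label in response["ModerationLabels"]:
    print(f"{label['Name']} (parent: {label['ParentName']}): {label['Confidence']:.1f}%")
```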

Pros of AWS Rekognition

  1. Specialized for Image/Video: Tailored for image and video recognition tasks, making it very efficient in these areas.
  2. High Accuracy for Object and Facial Recognition: Optimized models with pre-built accuracy for detecting objects, people, and faces in images.
  3. Real-time Analysis: Can process images and videos in real time.
  4. Pre-trained Models: No need to train models; out-of-the-box functionality for common tasks.
  5. Scalable: It can scale easily based on the number of images or videos you need to process.

Cons of AWS Rekognition

  1. Limited to Predefined Use Cases: The models are pre-trained for specific tasks (e.g., facial recognition, object detection). Customization options for very specific or niche needs are limited.
  2. Cost: Depending on the volume of images and videos processed, costs can add up, especially if dealing with large datasets or real-time video streams.
  3. Data Sensitivity: Sensitive use cases involving biometric data (e.g., facial recognition) may face compliance or privacy concerns in some regions.

Ideal Use Case for AWS Rekognition

  • Security systems for facial recognition.
  • Automating image or video content moderation.
  • Detecting objects, activities, and people in surveillance videos.
  • Media and entertainment industry for tagging or categorizing video content.
  • Extracting text from scanned documents or images (OCR).

Amazon Bedrock: When and Why to Use

Use Case: Text-related tasks and multimodal interactions (some language models support limited image-related tasks); Bedrock is not primarily designed for image recognition.

Key Features of Amazon Bedrock

  • Generative AI: Use large language models (LLMs) for tasks like text generation, summarization, or question answering.
  • Multimodal Models: Some models may support tasks that involve both text and image analysis, but they are not specialized for pure image recognition (see the sketch after this list).
  • Foundation Models: Provides access to a variety of pre-trained foundation models, which can be customized and used in specific domains like text, images (with generative models), and more.
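
As a sketch of the generative/multimodal path, the snippet below invokes a Titan text model through the Bedrock runtime with boto3. The model ID and request payload format vary per model, so treat both as assumptions to verify against the model's documentation:

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Titan text models accept an "inputText" payload; other model families use different schemas
body = json.dumps({"inputText": "Describe what a product photo of a red bicycle should contain."})

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID; check availability in your region
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result)
```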

Pros of Amazon Bedrock

  1. Generative AI Capabilities: Excellent for natural language tasks, from summarization to conversation and writing.
  2. Customizability: Models can be fine-tuned and adapted to specific business needs.
  3. Multimodal Integration: If using AI models that combine text with limited image features (e.g., interpreting image metadata, describing images), Bedrock could offer flexibility.

Cons of Amazon Bedrock

  1. Not Primarily for Image Recognition: Unlike AWS Rekognition, Bedrock doesn’t focus on analyzing and recognizing objects in images or video footage.
  2. Learning Curve for Customization: Customizing foundation models for specific tasks requires expertise.
  3. Higher Cost for Fine-tuning: Customizing models can be resource-intensive compared to using pre-trained image recognition services like Rekognition.

Ideal Use Case for Bedrock

  • Text-based tasks like natural language generation, summarization, or answering questions.
  • Building chatbots or conversational agents.
  • Tasks that involve interpreting textual descriptions of images or multimodal interactions.

Comparison: Pros and Cons for Image Recognition

| Feature | AWS Rekognition | Amazon Bedrock |
|---|---|---|
| Image Recognition | Excellent for image and video recognition (objects, faces, activities) | Limited image-related features (mainly for multimodal use cases) |
| Real-time Processing | Yes, supports real-time video and image analysis | Not designed for real-time image recognition |
| Customizability | Pre-built models with limited customization | Highly customizable for text tasks, less relevant for images |
| Scalability | Highly scalable for processing large image and video datasets | Scalable for language models; not ideal for scaling image tasks |
| Ease of Use | Easy to implement with pre-trained models for common use cases | Requires customization for non-text tasks |
| Cost | Costs may escalate with large datasets or real-time processing needs | Costs associated with fine-tuning models |
| Primary Use Case | Object, face detection, OCR, video analysis | Text generation, multimodal tasks (image and text) |
| Support for Custom Models | Pre-built for specific use cases (e.g., facial recognition, object detection) | Requires fine-tuning models for specific tasks (primarily language-based) |

When to Choose AWS Rekognition

  • When the focus is on image and video analysis tasks like object detection, face recognition, and moderation.
  • For real-time or large-scale image/video processing.
  • If you want out-of-the-box functionality for common image recognition tasks without needing to train models.
  • If working in domains like security, media, and compliance where specific image-related tasks are critical.

When to Choose Amazon Bedrock

  • When your focus is on text-based tasks and generative AI.
  • If working with multimodal models where a combination of text and image-related tasks (e.g., generating text from image metadata) is needed.
  • If you need to customize models deeply for domain-specific language tasks.

Glue job in Python that connects to an on-premise Oracle database, creates a fixed-length file, and writes it to an S3 bucket

To write a Glue job in Python that connects to an on-premise Oracle database, creates a fixed-length file, and writes it to an S3 bucket, you would need to:

  1. Set up a connection to Oracle Database using JDBC.
  2. Retrieve the data from the Oracle database.
  3. Format the data into a fixed-length format.
  4. Write the formatted data to an S3 bucket.

Here’s an outline of a Glue job script to achieve this:

Prerequisites:

  • Ensure that AWS Glue has network access to your on-premise Oracle Database (usually via AWS Direct Connect or VPN).
  • Add the Oracle JDBC driver to your Glue job (by uploading it to S3 and referencing it in the job).
  • Set up IAM roles and S3 permissions to write to the bucket.

Python Glue Job Script:

import sys
import os

import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

# Initialize Glue context and job
args = getResolvedOptions(sys.argv, ['JOB_NAME', 'oracle_jdbc_url', 'oracle_username', 'oracle_password', 's3_output_path'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)

# Oracle Database connection parameters
jdbc_url = args['oracle_jdbc_url']
oracle_username = args['oracle_username']
oracle_password = args['oracle_password']

# S3 output path
s3_output_path = args['s3_output_path']

# Oracle query (modify this query as per your requirement)
query = "SELECT column1, column2, column3 FROM your_table"

# Fetch data from Oracle via JDBC
# Note: Oracle does not accept the AS keyword for a subquery alias, so the
# dbtable value is "(query) data" rather than "(query) as data"
df = (spark.read.format("jdbc")
      .option("url", jdbc_url)
      .option("dbtable", f"({query}) data")
      .option("user", oracle_username)
      .option("password", oracle_password)
      .option("driver", "oracle.jdbc.driver.OracleDriver")
      .load())

# Convert each row into a fixed-length record
def format_row(row):
    column1 = str(row['column1']).ljust(20)  # Adjust the lengths as per requirement
    column2 = str(row['column2']).ljust(30)
    column3 = str(row['column3']).ljust(50)
    return column1 + column2 + column3

fixed_length_rdd = df.rdd.map(format_row)

# Write the fixed-length records to a local temp file
output_file_path = "/tmp/output_fixed_length_file.txt"
with open(output_file_path, "w") as f:
    for line in fixed_length_rdd.collect():
        f.write(line + "\n")

# Upload the file to S3
s3 = boto3.client('s3')
bucket_name = s3_output_path.replace("s3://", "").split("/")[0]
s3_key = "/".join(s3_output_path.replace("s3://", "").split("/")[1:])
s3.upload_file(output_file_path, bucket_name, s3_key)

# Cleanup
os.remove(output_file_path)

# Mark the job as complete
job.commit()

Explanation:

  1. Oracle JDBC connection: The script connects to your Oracle Database using the JDBC driver and retrieves data based on the query.
  2. Fixed-length formatting: The data is converted into fixed-length format by adjusting the length of each column using the ljust() method.
  3. File creation: The formatted data is written into a text file on the local disk.
  4. S3 upload: The file is uploaded to the specified S3 bucket using Boto3.
  5. Cleanup: Temporary files are removed after upload.

Glue Job Parameters:

You can pass the following arguments when you run the Glue job (a boto3 sketch for starting the run follows this list):

  • oracle_jdbc_url: The JDBC URL for your Oracle Database (e.g., jdbc:oracle:thin:@your_host:1521:your_service_name).
  • oracle_username: Oracle database username.
  • oracle_password: Oracle database password.
  • s3_output_path: The S3 path where you want to store the fixed-length file (e.g., s3://your-bucket/path/to/file.txt).
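
For reference, here is a minimal boto3 sketch that starts the job with these arguments. The job name and values are placeholders; note that Glue job arguments are passed with a leading `--`:

```python
import boto3

glue = boto3.client("glue")

run = glue.start_job_run(
    JobName="oracle-to-s3-fixed-length",  # hypothetical Glue job name
    Arguments={
        "--oracle_jdbc_url": "jdbc:oracle:thin:@your_host:1521:your_service_name",
        "--oracle_username": "your_user",
        "--oracle_password": "your_password",
        "--s3_output_path": "s3://your-bucket/path/to/file.txt",
    },
)
print("Started job run:", run["JobRunId"])
```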

Wednesday, October 2, 2024

Maven script to deploy MDS (Metadata Services) artifacts in a SOA 12.2.1.4

To create a Maven script to deploy MDS (Metadata Services) artifacts in a SOA 12.2.1.4 environment, you need to use the oracle-maven-sync configuration and Oracle's oracle-maven-plugin to manage the deployment. Below is a sample pom.xml setup and a script to achieve this.

The plugin installation, which is a prerequisite for this step, is covered in the section "Installation of Oracle SOA 12.2.1.4 Maven plugin" below.

Prerequisites

  1. Make sure the Oracle SOA 12.2.1.4 Maven plugin is installed in your local repository or is accessible through a corporate repository.
  2. Your environment should have Oracle WebLogic and SOA Suite 12.2.1.4 configured properly.
  3. Oracle MDS repository should be set up and accessible.

Maven pom.xml Configuration

Here’s a sample pom.xml file for deploying an MDS artifact using Maven:

xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>soa-mds-deployment</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <!-- Update with your SOA and WebLogic version -->
        <oracle.soa.version>12.2.1.4</oracle.soa.version>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
    </properties>

    <dependencies>
        <dependency>
            <groupId>oracle.soa.common</groupId>
            <artifactId>oracle-soa-maven-plugin</artifactId>
            <version>${oracle.soa.version}</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>oracle.soa.common</groupId>
                <artifactId>oracle-soa-maven-plugin</artifactId>
                <version>${oracle.soa.version}</version>
                <configuration>
                    <!-- Configuration for the SOA MDS deployment -->
                    <action>deploy</action>
                    <repositoryName>mds-soa</repositoryName>
                    <sourcePath>src/main/resources/mds/</sourcePath>
                    <serverURL>t3://<admin-server-host>:<admin-server-port></serverURL>
                    <username>weblogic</username>
                    <password>your_weblogic_password</password>
                    <partition>soa-infra</partition>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <profiles>
        <profile>
            <id>soa-mds-deploy</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>oracle.soa.common</groupId>
                        <artifactId>oracle-soa-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>deploy</goal>
                                </goals>
                            </execution>
                        </executions>
                        <configuration>
                            <!-- MDS repository configuration -->
                            <repositoryName>mds-soa</repositoryName>
                            <serverURL>t3://<admin-server-host>:<admin-server-port></serverURL>
                            <username>weblogic</username>
                            <password>your_weblogic_password</password>
                            <partition>soa-infra</partition>
                            <sourcePath>src/main/resources/mds/</sourcePath>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>
    </profiles>
</project>

Folder Structure

Ensure your project directory is structured like this:

.
├── pom.xml
└── src
    └── main
        └── resources
            └── mds
                └── your_mds_artifacts

Place your MDS artifacts (e.g., .xml or .wsdl files) in the src/main/resources/mds/ folder.

Maven Command

To deploy the MDS artifacts, use the following command:

bash
mvn clean install -Psoa-mds-deploy

Key Points

  1. repositoryName: The MDS repository name (mds-soa) should match the target repository configured in your SOA environment.
  2. serverURL: Replace <admin-server-host> and <admin-server-port> with your WebLogic Admin server’s host and port.
  3. username/password: Use the WebLogic Admin credentials to authenticate the deployment.
  4. sourcePath: Specify the folder containing your MDS artifacts.

This script configures a Maven build to deploy MDS artifacts to your SOA 12.2.1.4 environment. If you encounter specific errors during deployment, check the logs on the Admin server to ensure correct configurations.

Installation of Oracle SOA 12.2.1.4 Maven plugin

 To install the Oracle SOA 12.2.1.4 Maven plugin, follow the steps below. The Oracle SOA Maven plugin is not hosted in a public repository like Maven Central, so it needs to be installed manually from the Oracle installation directory or configured in a local repository.

Step 1: Locate the Oracle SOA Maven Plugin

The Oracle SOA Suite installation directory contains a script that generates a pom.xml file and installs the necessary SOA Maven artifacts into your local Maven repository. This is usually found in your Oracle Middleware home directory.

The typical path to the Maven sync script is:

<ORACLE_HOME>/oracle_common/plugins/maven/com/oracle/maven/oracle-maven-sync.jar

For example, on my server this file is located in the following directory:

C:\Oracle\Middleware\Oracle_Home\oracle_common\plugins\maven\com\oracle\maven\oracle-maven-sync\12.2.1

Step 2: Execute the oracle-maven-sync Script

  1. Open a terminal or command prompt.

  2. Navigate to the directory containing oracle-maven-sync.jar.

    bash
    cd C:\Oracle\Middleware\Oracle_Home\oracle_common\plugins\maven\com\oracle\maven\oracle-maven-sync\12.2.1
  3. Run the Maven sync command to install the SOA Maven plugin and dependencies:

    bash
    mvn install:install-file -DpomFile=oracle-maven-sync.xml -Dfile=oracle-maven-sync.jar

    Alternatively, you can use the oracle-maven-sync script:

    bash
    java -jar oracle-maven-sync.jar -f

This command installs all the necessary SOA artifacts, including the oracle-soa-maven-plugin, into your local Maven repository (~/.m2).

Step 3: Verify Installation

After running the command, verify that the artifacts have been installed in your local Maven repository. Check under the com/oracle/soa/oracle-soa-maven-plugin directory inside the .m2 folder:

~/.m2/repository/com/oracle/soa/oracle-soa-maven-plugin

You should see subdirectories like 12.2.1.4, containing the plugin JAR files and associated pom.xml files.

Step 4: Update the Maven pom.xml

Once the plugin is installed locally, update your pom.xml to reference it:

xml

<plugin>
    <groupId>com.oracle.soa</groupId>
    <artifactId>oracle-soa-maven-plugin</artifactId>
    <version>12.2.1.4</version>
</plugin>

Additional Configuration (Optional)

If you need to use this plugin in a shared environment (e.g., CI/CD pipeline or team development), consider deploying it to a shared Maven repository like Nexus or Artifactory. Here’s how to do that:

  1. Install the plugin to your shared repository:

    bash
    mvn deploy:deploy-file -DgroupId=com.oracle.soa \
        -DartifactId=oracle-soa-maven-plugin \
        -Dversion=12.2.1.4 \
        -Dpackaging=jar \
        -Dfile=<ORACLE_HOME>/soa/plugins/maven/oracle-soa-maven-plugin-12.2.1.4.jar \
        -DpomFile=<ORACLE_HOME>/soa/plugins/maven/oracle-soa-maven-plugin-12.2.1.4.pom \
        -DrepositoryId=<repository_id> \
        -Durl=<repository_url>
  2. Configure your pom.xml to point to the shared repository:

xml

<repositories>
    <repository>
        <id>shared-repo</id>
        <url>http://<repository_url>/repository/maven-public/</url>
    </repository>
</repositories>

Healthcare Information Extraction Using Amazon Bedrock using advanced NLP with Titan or Claude Models

Healthcare Information Extraction Using Amazon Bedrock

Client: Leading Healthcare Provider

Project Overview:
This project was developed for a healthcare client to automate the extraction of critical patient information from unstructured medical records using advanced Natural Language Processing (NLP) capabilities offered by Amazon Bedrock. The primary objective was to streamline the processing of patient case narratives, reducing the manual effort needed to identify key data points such as patient demographics, symptoms, medical history, medications, and recommended treatments.

Key Features Implemented:

  1. Automated Text Analysis: Utilized Amazon Bedrock's NLP models to analyze healthcare use cases, automatically identifying and extracting relevant clinical details.
  2. Customizable Information Extraction: Implemented the solution to support specific healthcare entities (e.g., patient name, age, symptoms, medications) using customizable extraction models.
  3. Seamless Integration: Integrated with existing systems using Java-based AWS SDK, enabling the healthcare provider to leverage the extracted information for clinical decision support and reporting.
  4. Real-time Data Processing: Enabled the client to process patient case records in real-time, accelerating the review of patient documentation and improving overall efficiency.

Amazon Bedrock provides access to foundation models for Natural Language Processing (NLP), which can be used for applications such as extracting relevant information from text documents. Below is an implementation design that uses Amazon Bedrock with Java to analyze patient healthcare use cases. This example illustrates how to structure a solution that uses the AWS SDK for Java to interact with Bedrock and apply language models like Titan or Claude (depending on model availability).

Prerequisites

  1. AWS SDK for Java: Make sure you have included the necessary dependencies for interacting with Amazon Bedrock.
  2. Amazon Bedrock Access: Ensure that your AWS credentials and permissions are configured to access Amazon Bedrock.
  3. Java 11 or Higher: Recommended to use a supported version of Java.

Step 1: Include Maven Dependencies

First, add the necessary dependencies in your pom.xml to include the AWS SDK for Amazon Bedrock.

xml

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <!-- The Bedrock runtime module exposes the InvokeModel API used in the code below -->
    <artifactId>bedrockruntime</artifactId>
    <version>2.21.0</version> <!-- use a recent SDK version that includes Bedrock support -->
</dependency>

Step 2: Set Up AWS SDK Client

Next, create a client to connect to Amazon Bedrock using the BedrockRuntimeClient provided by the AWS SDK (the runtime client is the one used to invoke models).

java
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;

public class BedrockHelper {

    public static BedrockRuntimeClient createBedrockClient() {
        return BedrockRuntimeClient.builder()
                .region(Region.US_EAST_1) // Set your AWS region
                .credentialsProvider(ProfileCredentialsProvider.create())
                .build();
    }
}

Step 3: Define a Method to Extract Information

Create a method that will interact with Amazon Bedrock, pass the healthcare use case text, and get relevant information back.

java

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelRequest;
import software.amazon.awssdk.services.bedrockruntime.model.InvokeModelResponse;

public class HealthcareUseCaseProcessor {

    private final BedrockRuntimeClient bedrockClient;

    public HealthcareUseCaseProcessor(BedrockRuntimeClient bedrockClient) {
        this.bedrockClient = bedrockClient;
    }

    public String extractRelevantInformation(String useCaseText) {
        InvokeModelRequest request = InvokeModelRequest.builder()
                .modelId("titan-chat-b7") // Replace with the relevant model ID
                .contentType("application/json")
                .body(SdkBytes.fromUtf8String("{ \"text\": \"" + useCaseText + "\" }"))
                .build();

        InvokeModelResponse response = bedrockClient.invokeModel(request);
        return response.body().asUtf8String(); // The response contains the extracted information
    }
}

Step 4: Analyze Patient Healthcare Use Cases

This example uses a test healthcare use case to demonstrate the interaction.

java
import software.amazon.awssdk.services.bedrockruntime.BedrockRuntimeClient;

public class BedrockApp {

    public static void main(String[] args) {
        BedrockRuntimeClient bedrockClient = BedrockHelper.createBedrockClient();
        HealthcareUseCaseProcessor processor = new HealthcareUseCaseProcessor(bedrockClient);

        // Sample healthcare use case text
        String healthcareUseCase = "Patient John Doe, aged 45, reported symptoms of chest pain and dizziness. "
                + "Medical history includes hypertension and type 2 diabetes. "
                + "Prescribed medication includes Metformin and Atenolol. "
                + "Referred for an ECG and follow-up with a cardiologist.";

        // Extract relevant information
        String extractedInfo = processor.extractRelevantInformation(healthcareUseCase);

        // Print the extracted information
        System.out.println("Extracted Information: " + extractedInfo);
    }
}

Step 5: Handling the Extracted Information

The extractRelevantInformation method uses Amazon Bedrock’s language models to identify key data points. Depending on the model and the request format, you may want to parse and analyze the output JSON.

For example, if the output JSON has a specific structure, you can use libraries like Jackson or Gson to parse the data:

java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public void processResponse(String jsonResponse) {
    ObjectMapper mapper = new ObjectMapper();
    try {
        JsonNode rootNode = mapper.readTree(jsonResponse);
        JsonNode patientName = rootNode.get("patient_name");
        JsonNode age = rootNode.get("age");
        System.out.println("Patient Name: " + patientName.asText());
        System.out.println("Age: " + age.asText());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Points to Consider

  1. Model Selection: Choose the correct model that suits your use case, such as those specialized in entity extraction or text classification.
  2. Region Availability: Amazon Bedrock is available in specific regions. Make sure you are using the right region.
  3. API Limits: Be aware of any rate limits or quotas for invoking models on Amazon Bedrock.

 

Monday, September 2, 2024

Dockerfile and Steps to build Docker image for your Spring Boot project



To build a Docker image for your Spring Boot project, follow these steps:


 Prerequisites

1. Docker installed on your machine.

2. A built Spring Boot JAR file in your `target` directory (e.g., `target/demo-0.0.1-SNAPSHOT.jar`).

3. A Dockerfile in the root directory of your project (see the Dockerfile example below).


 Step-by-Step Instructions


1. Navigate to the Root Directory of Your Project

   Open a terminal and go to the root directory where your `Dockerfile` is located:


   ```bash

   cd /path/to/your/project

   ```


2. Build the Spring Boot JAR

   Make sure that the Spring Boot JAR file is available in the `target` directory. If not, build it using Maven:


   ```bash

   mvn clean package

   ```


   After running this command, a JAR file will be created in the `target` folder (e.g., `target/demo-0.0.1-SNAPSHOT.jar`).


3. Build the Docker Image

   Use the `docker build` command to build the Docker image:


   ```bash

   docker build -t springboot-app .

   ```


   - `-t springboot-app`: The `-t` flag is used to name the image. Here, `springboot-app` is the name of your Docker image.

   - `.`: The period (`.`) at the end specifies the current directory as the build context, where the Dockerfile is located.


4. Verify the Docker Image

   After the build is complete, verify that the image was created using the `docker images` command:


   ```bash

   docker images

   ```


   You should see an entry similar to the following:


   ```

   REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

   springboot-app      latest              123abc456def        5 minutes ago       500MB

   ```


5. Run the Docker Container

   Once the Docker image is built, you can run a container using the `docker run` command:


   ```bash

   docker run -p 8080:8080 springboot-app

   ```


   - `-p 8080:8080`: Maps port 8080 on your local machine to port 8080 in the Docker container.

   - `springboot-app`: The name of the Docker image you built.


6. Access Your Spring Boot Application

   Open a web browser and navigate to:


   ```

   http://localhost:8080

   ```


   You should see your Spring Boot application running!


 Additional Tips


- Tagging the Image with Versions: You can tag the image with a specific version using `:version`:


  ```bash

  docker build -t springboot-app:v1.0 .

  ```


- Running with Environment Variables: You can pass environment variables to the container using the `-e` flag:


  ```bash

  docker run -p 8080:8080 -e "SPRING_PROFILES_ACTIVE=prod" springboot-app

  ```


- Running the Container in Detached Mode: Use the `-d` flag to run the container in detached mode:


  ```bash

  docker run -d -p 8080:8080 springboot-app

  ```

Here's a `Dockerfile` using `openjdk:17` as the base image and including environment variables configuration.


Dockerfile Contents

```dockerfile

# Use the official OpenJDK 17 image

FROM openjdk:17-jdk-slim


# Set the working directory inside the container

WORKDIR /app


# Copy the Spring Boot JAR file into the container

COPY target/*.jar app.jar


# Expose the port that the Spring Boot application runs on (optional, defaults to 8080)

EXPOSE 8080


# Set environment variables (optional: add your specific environment variables here)

ENV SPRING_PROFILES_ACTIVE=prod \

    JAVA_OPTS="-Xms256m -Xmx512m" \

    APP_NAME="springboot-app"


# Run the Spring Boot application using the environment variables

ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar app.jar"]

```


 Key Components Explained

1. `FROM openjdk:17-jdk-slim`:

   - Uses the official OpenJDK 17 image (`slim` variant) for a lightweight build.

   

2. `WORKDIR /app`:

   - Sets the working directory inside the container to `/app`.


3. `COPY target/*.jar app.jar`:

   - Copies the built Spring Boot JAR file (`*.jar`) from the `target` directory into the `/app` directory inside the container, renaming it to `app.jar`.


4. `EXPOSE 8080`:

   - Opens port `8080` on the container to allow external traffic to reach the application. This is optional but helps document the expected port.


5. `ENV ...`:

   - Adds environment variables to the Docker image.

   - `SPRING_PROFILES_ACTIVE`: Sets the Spring Boot profile (e.g., `dev`, `test`, `prod`).

   - `JAVA_OPTS`: Allows you to pass JVM options, such as memory settings or GC options.

   - `APP_NAME`: A custom environment variable to hold the name of the application.


6. `ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar app.jar"]`:

   - Runs the JAR file using `java -jar` and includes the specified JVM options (`JAVA_OPTS`).

   - `sh -c` allows the `JAVA_OPTS` variable to be evaluated at runtime.


 



Wednesday, August 21, 2024

AWS Glue and Machine Learning to Encrypt PII Data

 

Key Points:

  1. Download S3 File: The download_s3_file function reads the file from S3 into a pandas DataFrame.
  2. Encryption: The encrypt_data function encrypts SSN and credit card information using the KMS key.
  3. Processing: The process_and_encrypt_pii function applies encryption and removes sensitive fields.
  4. Save as Parquet: The save_as_parquet function converts the DataFrame to a Parquet file.
  5. Upload to S3: The upload_parquet_to_s3 function uploads the Parquet file back to S3.
  6. ML Model Loading and Prediction:
    1. The apply_ml_model function loads a pre-trained ML model using joblib and applies it to the DataFrame. The model's prediction is added as a new column to the DataFrame.
  7. ML Model Path:
    • The ml_model_path variable specifies the location of your pre-trained ML model (e.g., a .pkl file).

Prerequisites:

  • You need to have a pre-trained ML model saved as a .pkl file, trained and serialized using a library like scikit-learn (a minimal sketch follows this list).
  • Make sure the feature set used by the ML model is compatible with the DataFrame after encryption.
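
If you do not have a serialized model yet, here is a minimal sketch of producing one, using synthetic data purely for illustration:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for your real training data
X, y = make_classification(n_samples=500, n_features=4, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Serialize the fitted model; load it later with joblib.load("ml-model.pkl")
joblib.dump(model, "ml-model.pkl")
```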

import boto3
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
from botocore.exceptions import ClientError
from cryptography.fernet import Fernet
import base64
import io
import joblib  # for loading the ML model (sklearn.externals.joblib was removed from scikit-learn; use the standalone joblib package)

# Initialize the AWS services
s3 = boto3.client('s3')
kms = boto3.client('kms')

def download_s3_file(bucket_name, file_key):
    """Download file from S3 and return its contents as a pandas DataFrame."""
    try:
        obj = s3.get_object(Bucket=bucket_name, Key=file_key)
        df = pd.read_csv(io.BytesIO(obj['Body'].read()))  # Assuming the file is in CSV format
        return df
    except ClientError as e:
        print(f"Error downloading file from S3: {e}")
        raise

def encrypt_data(kms_key_id, data):
    """Encrypt data using AWS KMS."""
    response = kms.encrypt(KeyId=kms_key_id, Plaintext=data.encode())
    encrypted_data = base64.b64encode(response['CiphertextBlob']).decode('utf-8')
    return encrypted_data

def process_and_encrypt_pii(df, kms_key_id):
    """Encrypt SSN and credit card information in the DataFrame."""
    df['encrypted_ssn'] = df['ssn'].apply(lambda x: encrypt_data(kms_key_id, x))
    df['encrypted_credit_card'] = df['credit_card'].apply(lambda x: encrypt_data(kms_key_id, x))

    # Drop original sensitive columns
    df = df.drop(columns=['ssn', 'credit_card'])
    return df

def apply_ml_model(df, model_path):
    """Apply a pre-trained ML model to the DataFrame."""
    # Load the ML model (assuming it's a scikit-learn model saved with joblib)
    model = joblib.load(model_path)
    
    # Assuming the model predicts a column called 'prediction'
    features = df.drop(columns=['encrypted_ssn', 'encrypted_credit_card'])  # Adjust based on your feature set
    df['prediction'] = model.predict(features)
    
    return df

def save_as_parquet(df, output_file_path):
    """Save the DataFrame as a Parquet file."""
    table = pa.Table.from_pandas(df)
    pq.write_table(table, output_file_path)

def upload_parquet_to_s3(bucket_name, output_file_key, file_path):
    """Upload the Parquet file to an S3 bucket."""
    try:
        s3.upload_file(file_path, bucket_name, output_file_key)
        print(f"Successfully uploaded Parquet file to s3://{bucket_name}/{output_file_key}")
    except ClientError as e:
        print(f"Error uploading Parquet file to S3: {e}")
        raise

def main():
    # S3 bucket and file details
    input_bucket = 'your-input-bucket-name'
    input_file_key = 'path/to/your/input-file.csv'
    output_bucket = 'your-output-bucket-name'
    output_file_key = 'path/to/your/output-file.parquet'
    
    # KMS key ID
    kms_key_id = 'your-kms-key-id'

    # ML model path
    ml_model_path = 'path/to/your/ml-model.pkl'
    
    # Local output file path
    local_output_file = '/tmp/output-file.parquet'

    # Download the file from S3
    df = download_s3_file(input_bucket, input_file_key)

    # Encrypt sensitive information
    encrypted_df = process_and_encrypt_pii(df, kms_key_id)

    # Apply the ML model
    final_df = apply_ml_model(encrypted_df, ml_model_path)

    # Save the DataFrame as a Parquet file
    save_as_parquet(final_df, local_output_file)

    # Upload the Parquet file back to S3
    upload_parquet_to_s3(output_bucket, output_file_key, local_output_file)

if __name__ == "__main__":
    main()
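
For completeness, a downstream consumer of the Parquet file would need to reverse the encryption. Here is a minimal sketch of the decryption counterpart to encrypt_data (for symmetric KMS keys, the service infers the key from the ciphertext):

```python
import base64
import boto3

kms = boto3.client('kms')

def decrypt_data(encrypted_data: str) -> str:
    """Reverse of encrypt_data: base64-decode the stored value, then decrypt via KMS."""
    response = kms.decrypt(CiphertextBlob=base64.b64decode(encrypted_data))
    return response['Plaintext'].decode('utf-8')
```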



Thursday, May 2, 2024

Sentiment Analysis using NLP - Java SDK for Amazon Bedrock/Amazon Sagemaker

Sentiment analysis is a natural language processing (NLP) technique used to determine the sentiment or emotional tone expressed in a piece of text. It involves analyzing text data to classify it into categories such as positive, negative, or neutral sentiments.


Here's a basic overview of how sentiment analysis using NLP works:


1. Text Preprocessing: The text data is preprocessed to remove noise, such as special characters, punctuation, and stopwords (commonly occurring words like "the", "is", "and", etc.). Additionally, text may be converted to lowercase for consistency.


2. Feature Extraction: Features are extracted from the preprocessed text data. These features could be individual words (unigrams), combinations of words (bigrams, trigrams), or other linguistic features.


3. Sentiment Classification: Machine learning models, such as classification algorithms like Support Vector Machines (SVM), Naive Bayes, or deep learning models like Recurrent Neural Networks (RNNs) or Transformers, are trained using labeled data. Labeled data consists of text samples along with their corresponding sentiment labels (positive, negative, or neutral).


4. Model Training: The extracted features are used to train the sentiment analysis model. During training, the model learns to recognize patterns in the text data that are indicative of specific sentiments.


5. Model Evaluation: The trained model is evaluated using a separate set of labeled data (validation or test set) to assess its performance in classifying sentiments accurately. Evaluation metrics such as accuracy, precision, recall, and F1-score are commonly used to measure the model's effectiveness.


6. Inference: Once the model is trained and evaluated, it can be used to perform sentiment analysis on new, unseen text data. The model predicts the sentiment of each text sample, classifying it as positive, negative, or neutral.


Sentiment analysis has various applications across different domains, including:


- Customer feedback analysis: Analyzing customer reviews, comments, or social media posts to understand customer sentiment towards products or services.

- Brand monitoring: Monitoring online mentions and discussions to gauge public sentiment towards a brand or organization.

- Market research: Analyzing sentiment in news articles, blogs, or social media discussions to assess market trends and consumer preferences.

- Voice of the customer (VoC) analysis: Extracting insights from customer surveys or feedback forms to identify areas for improvement and measure customer satisfaction.


Overall, sentiment analysis using NLP enables businesses and organizations to gain valuable insights from text data, helping them make data-driven decisions and enhance customer experiences.


To perform sentiment analysis from Java on AWS, you can use the AWS SDK for Java. Below is a basic example that invokes an NLP model hosted on an Amazon SageMaker endpoint; a Bedrock-hosted model follows the same invoke pattern through the Bedrock runtime client:


import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntime;
import com.amazonaws.services.sagemakerruntime.AmazonSageMakerRuntimeClientBuilder;
import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointRequest;
import com.amazonaws.services.sagemakerruntime.model.InvokeEndpointResult;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BedrockNLPExample {

    public static void main(String[] args) {
        // Replace these values with your AWS credentials and SageMaker endpoint
        String accessKey = "YOUR_ACCESS_KEY";
        String secretKey = "YOUR_SECRET_KEY";
        String endpointUrl = "YOUR_SAGEMAKER_ENDPOINT_URL";

        // Initialize AWS credentials
        BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);

        // Create a SageMaker runtime client (InvokeEndpoint lives on the runtime client,
        // not on the control-plane AmazonSageMaker client)
        AmazonSageMakerRuntime sageMakerRuntime = AmazonSageMakerRuntimeClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointUrl, "us-west-2")) // Change region if needed
                .build();

        // Sample text for sentiment analysis
        String text = "I love using AWS services.";

        // Invoke the endpoint for sentiment analysis
        InvokeEndpointRequest request = new InvokeEndpointRequest()
                .withEndpointName("YOUR_SAGEMAKER_ENDPOINT_NAME") // Replace with your SageMaker endpoint name
                .withContentType("text/csv")
                .withBody(ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8)));

        InvokeEndpointResult result = sageMakerRuntime.invokeEndpoint(request);

        // Process the result
        String responseBody = new String(result.getBody().array(), StandardCharsets.UTF_8);
        System.out.println("Sentiment Analysis Result: " + responseBody);
    }
}

This code assumes you have already set up an endpoint in Amazon SageMaker for NLP tasks such as sentiment analysis. It sends the text to the endpoint and prints the result. Ensure that you have the necessary permissions and that your SageMaker endpoint is properly configured to handle the request.
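
For comparison, the same call from Python is a short boto3 sketch (the endpoint name is a placeholder):

```python
import boto3

runtime = boto3.client("sagemaker-runtime", region_name="us-west-2")

# Send the raw text to the SageMaker-hosted model and read back the prediction
response = runtime.invoke_endpoint(
    EndpointName="YOUR_SAGEMAKER_ENDPOINT_NAME",  # placeholder endpoint name
    ContentType="text/csv",
    Body="I love using AWS services.",
)
print("Sentiment Analysis Result:", response["Body"].read().decode("utf-8"))
```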



Thursday, March 14, 2024

OCI Knowledge Series: OCI Infrastructure components

 Oracle Cloud Infrastructure (OCI) provides a comprehensive set of infrastructure services that enable you to build and run a wide range of applications in a highly available, secure, and scalable environment. Below are the various components of OCI infrastructure:


The following are some of the key components of Oracle Cloud Infrastructure (OCI) that enable you to build and manage your cloud infrastructure and applications:

1. Regions: A region is a localized geographic area composed of one or more availability domains. Regions are isolated from and independent of each other in terms of fault tolerance and availability. Each region contains multiple data centers called availability domains.

2. Availability Domains (AD): An availability domain is a standalone, independent data center within a region. Availability domains are isolated from each other, with their own power, cooling, and networking infrastructure. This isolation enhances fault tolerance and availability. OCI services deployed within a region are designed to be resilient to failures within an availability domain.

3. Virtual Cloud Network (VCN): A VCN is a customizable, private network within OCI where you can launch your compute instances, block storage, and other resources. It is logically isolated from other virtual networks in the OCI environment, providing you with control over your network settings, such as IP addressing, route tables, and gateways.

4. Subnets: Subnets are subdivisions of a VCN and represent segmented portions of your network. You can divide a VCN into one or more subnets to host different types of resources. Subnets can be public or private, depending on whether they have internet connectivity.

5. Compute Instances: Compute instances, also known as virtual machines (VMs), are virtualized computing environments where you can run your applications. OCI offers various types of compute instances, including general-purpose, high-performance, and GPU instances, suited for different workload requirements.

6. Block Storage: OCI provides block storage services for storing persistent data. Block volumes can be attached to compute instances as additional disks to provide scalable and high-performance storage.

7. Object Storage: OCI Object Storage is a highly scalable and durable storage service for storing unstructured data, such as documents, images, and videos. It provides a cost-effective solution for storing and retrieving large amounts of data.

8. Networking Services: OCI offers a variety of networking services, including load balancers, DNS, VPN, and FastConnect, to enable secure and efficient communication between resources within your VCN and with external networks.

9. Database Services: OCI provides fully managed database services, including Oracle Autonomous Database, MySQL, and NoSQL Database, to support different types of database workloads.

10. Identity and Access Management (IAM): IAM is a centralized service for managing user access and permissions in OCI. It enables you to define and enforce security policies, roles, and permissions to control who can access which resources and perform specific actions.

11. Security Services: OCI offers a range of security services, such as Web Application Firewall (WAF), Key Management, and Security Zones, to protect your applications and data from security threats.

12. Monitoring and Management Tools: OCI provides monitoring, logging, and management tools, including OCI Monitoring, Logging, and Resource Manager, to help you monitor, troubleshoot, and manage your resources effectively.

OCI (Oracle Cloud Infrastructure) SDK to provision VCN, Subnet and establish VPN connectivity

 Oracle Cloud Infrastructure (OCI) Virtual Cloud Network (VCN) is the networking layer of the Oracle Cloud Infrastructure, equivalent to the Virtual Private Cloud (VPC) in other cloud providers. A VCN allows you to set up a customizable and private network in Oracle’s cloud. You can control the VCN’s IP address range, create subnets, and configure route tables and gateways to manage traffic within or outside the VCN.

  1. Private and Isolated Network: A VCN provides an isolated network within the Oracle Cloud Infrastructure.
  2. Customizable: You can set the IP CIDR block, create subnets, and use Network Security Groups or Security Lists to control inbound and outbound traffic.
  3. Route Tables: Define how the traffic is routed within your VCN or to the internet.
  4. Internet Gateway: Allows traffic to flow between your VCN and the internet.
  5. NAT Gateway: Allows instances in a private subnet to initiate outbound connections to the internet without exposing their IP addresses.
  6. VPN Gateway: For secure, encrypted communication between your on-premise network and your VCN.
  7. Load Balancer: Distributes incoming traffic across multiple targets to ensure high availability.
  8. Service Gateway: Provides a path for private traffic between your VCN and supported Oracle services.


This code snippet creates a VCN, subnet, security list, and site-to-site VPN (IPSec connection) using the OCI Java SDK. Networking resources are managed by the Core service's VirtualNetworkClient (not the Identity client). Make sure to handle exceptions appropriately in your production code.


Make sure to replace "your_compartment_id", "YourVCN", "YourSubnet", "YourSecurityList", and "YourVPN" with appropriate values for your Oracle Cloud tenancy, Virtual Cloud Network (VCN), subnet, security list, and VPN display names respectively. The IPSec connection additionally requires the OCIDs of an existing CPE object and DRG ("your_cpe_ocid" and "your_drg_ocid" below).

Ensure that your OCI configuration file (typically found at ~/.oci/config) is properly configured with your user credentials and the correct region.


import com.oracle.bmc.Region;
import com.oracle.bmc.auth.AuthenticationDetailsProvider;
import com.oracle.bmc.auth.ConfigFileAuthenticationDetailsProvider;
import com.oracle.bmc.core.VirtualNetworkClient;
import com.oracle.bmc.core.model.CreateIPSecConnectionDetails;
import com.oracle.bmc.core.model.CreateSecurityListDetails;
import com.oracle.bmc.core.model.CreateSubnetDetails;
import com.oracle.bmc.core.model.CreateVcnDetails;
import com.oracle.bmc.core.model.EgressSecurityRule;
import com.oracle.bmc.core.model.IPSecConnection;
import com.oracle.bmc.core.model.IngressSecurityRule;
import com.oracle.bmc.core.model.SecurityList;
import com.oracle.bmc.core.model.Subnet;
import com.oracle.bmc.core.model.Vcn;
import com.oracle.bmc.core.requests.CreateIPSecConnectionRequest;
import com.oracle.bmc.core.requests.CreateSecurityListRequest;
import com.oracle.bmc.core.requests.CreateSubnetRequest;
import com.oracle.bmc.core.requests.CreateVcnRequest;
import com.oracle.bmc.model.BmcException;

import java.io.IOException;
import java.util.Collections;

public class InfrastructureProvisioning {

    public static void main(String[] args) throws IOException {
        String compartmentId = "your_compartment_id";
        String vcnDisplayName = "YourVCN";
        String subnetDisplayName = "YourSubnet";
        String securityListDisplayName = "YourSecurityList";
        String vpnDisplayName = "YourVPN";

        // Path to your OCI configuration file
        String configurationFilePath = "~/.oci/config";

        // Get the authentication details from the OCI configuration file
        AuthenticationDetailsProvider provider =
                new ConfigFileAuthenticationDetailsProvider(configurationFilePath, "DEFAULT");

        // VCN, subnet, security list, and IPSec VPN are Core (VirtualNetwork) resources,
        // so they are created through the VirtualNetworkClient
        VirtualNetworkClient vcnClient = new VirtualNetworkClient(provider);
        vcnClient.setRegion(Region.US_PHOENIX_1); // Change to appropriate region

        try {
            // Create VCN
            CreateVcnDetails createVcnDetails = CreateVcnDetails.builder()
                    .cidrBlock("10.0.0.0/16")
                    .compartmentId(compartmentId)
                    .displayName(vcnDisplayName)
                    .build();

            Vcn vcn = vcnClient.createVcn(CreateVcnRequest.builder()
                    .createVcnDetails(createVcnDetails)
                    .build()).getVcn();

            // Create Subnet
            CreateSubnetDetails createSubnetDetails = CreateSubnetDetails.builder()
                    .cidrBlock("10.0.0.0/24")
                    .compartmentId(compartmentId)
                    .displayName(subnetDisplayName)
                    .vcnId(vcn.getId())
                    .build();

            Subnet subnet = vcnClient.createSubnet(CreateSubnetRequest.builder()
                    .createSubnetDetails(createSubnetDetails)
                    .build()).getSubnet();

            // Create Security List (egress and ingress rules are separate types)
            EgressSecurityRule egressRule = EgressSecurityRule.builder()
                    .destination("0.0.0.0/0")
                    .protocol("all")
                    .build();

            IngressSecurityRule ingressRule = IngressSecurityRule.builder()
                    .source("0.0.0.0/0")
                    .protocol("all")
                    .build();

            CreateSecurityListDetails createSecurityListDetails = CreateSecurityListDetails.builder()
                    .compartmentId(compartmentId)
                    .displayName(securityListDisplayName)
                    .egressSecurityRules(Collections.singletonList(egressRule))
                    .ingressSecurityRules(Collections.singletonList(ingressRule))
                    .vcnId(vcn.getId())
                    .build();

            SecurityList securityList = vcnClient.createSecurityList(CreateSecurityListRequest.builder()
                    .createSecurityListDetails(createSecurityListDetails)
                    .build()).getSecurityList();

            // Create a site-to-site VPN (IPSec connection). This requires an existing
            // Customer-Premises Equipment (CPE) object and Dynamic Routing Gateway (DRG);
            // replace the OCID placeholders with your own values
            CreateIPSecConnectionDetails createIpSecDetails = CreateIPSecConnectionDetails.builder()
                    .compartmentId(compartmentId)
                    .displayName(vpnDisplayName)
                    .cpeId("your_cpe_ocid")
                    .drgId("your_drg_ocid")
                    .staticRoutes(Collections.singletonList("192.168.0.0/16"))
                    .build();

            IPSecConnection vpn = vcnClient.createIPSecConnection(CreateIPSecConnectionRequest.builder()
                    .createIPSecConnectionDetails(createIpSecDetails)
                    .build()).getIPSecConnection();

            System.out.println("VPN Created: " + vpn.getId());
            System.out.println("Subnet Created: " + subnet.getId());
            System.out.println("Security List Created: " + securityList.getId());
        } catch (BmcException e) {
            System.out.println("Error: " + e.getMessage());
        } finally {
            vcnClient.close();
        }
    }
}

