Monday, May 22, 2017

Oracle SOA Cloud: Deploy MDS Artifacts on a SOA Cloud Instance Using Cloud Enterprise Manager

This blog post discusses the steps for deploying MDS artifacts to a SOA Cloud instance.

Set Up the JDeveloper SOA_DesignTimeRepository

JDeveloper 12.2.1 and 12.2.1.2 create the SOA_DesignTimeRepository by default. This is a file-based repository. At MindTelligent, we link this directory to an SVN or Git master repository. It is imperative that the repository has a main folder named /apps and that all artifacts are stored under it.

A closer look at the Design Time Repository shows the structure below:

[Screenshot: SOA_DesignTimeRepository folder structure in JDeveloper]

To change the location of the directory, right-click SOA_DesignTimeRepository and click Properties.

[Screenshot: SOA_DesignTimeRepository Properties dialog]

Click the Browse button and choose the folder you wish to use as the file-based repository.
Please note that you should not select the /apps folder itself; select its parent folder, under which all the artifacts are located.

That is it for the JDeveloper setup. You are ready to build your composites, which can reference the shared MDS artifacts through oramds:/apps/... URLs.


Deploy the MDS Artifacts on SOA Cloud


  • Log in to your SOA Cloud Instance 
  • Navigate to the "SOA Fusion Middleware Control Console"
  • Navigate to SOA Infrastructure-->Administration-->MDS configuration

[Screenshot: MDS Configuration page in the SOA Fusion Middleware Control Console]

Choose the option to Import the MDS

[Screenshot: MDS import option on the MDS Configuration page]

Navigate to the folder where the zip file of the SOA_DesignTimeRepository is located.

The zip file MUST include the /apps folder at its root.
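
If you need to create this archive from the design-time repository, the snippet below is a minimal Python sketch; the repository and output paths are placeholder values, and it assumes the layout described above, with all artifacts under /apps:

    # Hypothetical helper: archive the file-based repository so that the
    # zip contains apps/... at its root, which is the layout EM expects.
    import os
    import zipfile

    repo_root = r"C:\JDeveloper\mywork\SOA_DesignTimeRepository"  # placeholder path
    archive = r"C:\temp\mds_artifacts.zip"                        # placeholder path

    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _, filenames in os.walk(os.path.join(repo_root, "apps")):
            for name in filenames:
                full_path = os.path.join(dirpath, name)
                # Store each entry relative to the repository root -> "apps/..."
                arcname = os.path.relpath(full_path, repo_root).replace(os.sep, "/")
                zf.write(full_path, arcname)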

[Screenshot: selecting the MDS archive in the import dialog]

EM will display the following message if all the artifacts are successfully uploaded to the MDS:

[Screenshot: confirmation message after a successful MDS import]

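As an alternative to the EM wizard, the same import can be scripted with the WLST MDS commands. The sketch below is illustrative only: the host, credentials, managed-server name, and archive path are placeholders, and it assumes the archive is readable from the machine where WLST runs.

    # Illustrative WLST (Jython) sketch -- run with wlst.sh/wlst.cmd from
    # the oracle_common/common/bin directory of the middleware home.
    connect('weblogic', '<password>', 't3://<admin-host>:7001')

    # Import every document under /apps from the archive into the
    # MDS partition used by the soa-infra application.
    importMetadata(application='soa-infra',
                   server='soa_server1',
                   fromLocation='/tmp/mds_artifacts.zip',
                   docs='/apps/**')

    disconnect()
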
Friday, May 5, 2017

Installation of Apache Spark on Windows 10

Apache Spark is an open-source cluster-computing framework. Originally developed at the University of California, Berkeley's AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since. Spark provides an interface for programming entire clusters with implicit data parallelism and fault-tolerance.

Please follow the instructions below to install Apache Spark on Windows 10.

Prerequisites:

Please ensure that JDK 1.8 or above is installed in your environment.

Steps:

Installation of Scala 2.12.2
  • Scala can be downloaded from here.
  • The download gives you an .msi file. Follow the installer's instructions to install Scala.

Installation of Spark


  • Spark can be downloaded from here.
  • I am choosing version 2.1.1, prebuilt for Hadoop. Please note that I shall be running it without Hadoop.
  • Extract the tar file (for example, with 7-Zip) into a folder called c:\Spark.
  • The contents of the extract will look like this:

[Screenshot: contents of c:\Spark after extraction]

Download Winutils


  • Download winutils.exe from this link: 64 bits.
  • Create a folder c:\Spark\Winutils\bin and copy winutils.exe there.
  • The folder structure will look like this:

[Screenshot: c:\Spark\Winutils\bin folder containing winutils.exe]

Set Up Environment Variables


  • The following environment variables will need to be set up (a sanity-check sketch follows this list):
    • JAVA_HOME: C:\jdk1.8.0_91
    • SCALA_HOME: C:\Program Files (x86)\scala (and add %SCALA_HOME%\bin to PATH)
    • _JAVA_OPTIONS: -Xms128m -Xmx256m
    • HADOOP_HOME: C:\Spark\Winutils
    • SPARK_HOME: C:\Spark (the folder where Spark was extracted)
  • Create a folder c:\tmp\hive and give it read/write/execute privileges for all, for example by running c:\Spark\Winutils\bin\winutils.exe chmod 777 \tmp\hive from a command prompt.
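
Before testing, it can help to sanity-check these variables. The Python snippet below is a minimal sketch that simply verifies each variable points at an existing location (variable names as listed above):

    # Quick sanity check: verify the environment variables above point
    # at real locations before launching Spark.
    import os

    for var in ("JAVA_HOME", "SCALA_HOME", "HADOOP_HOME", "SPARK_HOME"):
        path = os.environ.get(var)
        status = "OK" if path and os.path.isdir(path) else "MISSING"
        print(var, "->", path, status)

    # winutils.exe must sit in %HADOOP_HOME%\bin for Spark to start cleanly.
    winutils = os.path.join(os.environ.get("HADOOP_HOME", ""), "bin", "winutils.exe")
    print("winutils.exe found:", os.path.isfile(winutils))
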
Test Spark Environment

  • Navigate to SPARK_HOME\bin and execute the command spark-shell
You should be ready to use Spark.
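
At the scala> prompt, a one-liner such as sc.parallelize(1 to 100).count() should return 100. If you also have Python installed, you can run the same kind of sanity test from the PySpark shell (SPARK_HOME\bin\pyspark); the sketch below uses the SparkContext (sc) that the shell creates for you:

    # Run inside the pyspark shell; `sc` is the SparkContext it provides.
    rdd = sc.parallelize(range(100))
    evens = rdd.filter(lambda x: x % 2 == 0)
    print(evens.count())  # expect 50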