Thursday, December 24, 2020

OHS: the plug-in does not fail over

 

CAUSE

The multiple invocations had nothing to do with OSB; they were caused by the configuration of the WebLogic plug-in for the Apache load balancer. This is expected behavior for the default 11g configuration in certain situations. The issue is due to the plug-in configuration parameter "Idempotent".

When "Idempotent" is turned on, the plug-in attempts to resend the HTTP request after it times out (the timeout is configured with the "WLIOTimeoutSecs" plug-in parameter).

http://download.oracle.com/docs/cd/E13222_01/wls/docs100/plugins/plugin_params.html

 

Unsolicited multiple invocations like this have been reported in other products using the same load balancer configuration.

SOLUTION

 

  1. Turn OFF Idempotent in the web server plugin configuration.
    http://download.oracle.com/docs/cd/E13222_01/wls/docs100/plugins/plugin_params.html
    If "Idempotent" is set to “OFF” the plugin will not fail over.
  2. If not explicitly set, WLIOTimeoutSecs defaults to 300 seconds (5 minutes).
    You can add a line to the httpd.conf file ($ORACLE_INSTANCE/config/OHS/ohsx) to set the WLIOTimeoutSecs parameter, as in the sketch below.
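
For example, a minimal mod_wl_ohs Location block combining both settings might look like the sketch below (the /osb context path and cluster members are placeholders for your own environment):

<Location /osb>
    SetHandler weblogic-handler
    WebLogicCluster host1:7001,host2:7001
    WLIOTimeoutSecs 600
    Idempotent OFF
</Location>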

Wednesday, December 23, 2020

OHS Terminating SSL Requests

 

Terminating SSL Requests

The following sections describe how to terminate requests using SSL before or within Oracle HTTP Server, where the mod_wl_ohs module forwards requests to WebLogic Server. Whether you terminate SSL before the request reaches Oracle HTTP Server or within the server itself depends on your topology. A common reason to terminate SSL is performance, when an internal network is otherwise protected and there is no risk of a third party intercepting data within the communication. Another reason is when WebLogic Server is not configured to accept HTTPS requests.

This section includes the following topics:

About Terminating SSL at the Load Balancer

If you are using another device, such as a load balancer or a reverse proxy, that terminates SSL before requests reach Oracle HTTP Server, then you must configure the server to treat those requests as if they had been received through HTTPS. The server must also be configured to send HTTPS responses back to the client.

Figure 9-1 illustrates an example where the request is transmitted from the browser through HTTPS toward WebLogic Server. The load balancer terminates SSL and transmits the request onward as HTTP. Oracle HTTP Server must be configured to treat the request as if it had been received through HTTPS.

Figure 9-1 Terminating SSL Before Oracle HTTP Server

Terminating SSL at the Load Balancer

To instruct the Oracle HTTP Server to treat requests as if they were received through HTTPS, configure the httpd.conf file with the SimulateHttps directive in the mod_certheaders module.

For more information on mod_certheaders module, see mod_certheaders Module—Enables Reverse Proxies.

Note:

This procedure is not necessary if SSL is configured on Oracle HTTP Server (that is, if you are directly accessing Oracle HTTP Server using HTTPS).

  1. Configure the httpd.conf configuration file with the external name of the server and its port number, for example:
    ServerName <www.company.com:port>
    
  2. Configure the httpd.conf configuration file to load the mod_certheaders module, for example:
    • On UNIX:

      LoadModule certheaders_module libexec/mod_certheaders.so
      
    • On Windows:

      LoadModule certheaders_module modules/ApacheModuleCertHeaders.dll
      AddModule mod_certheaders.c
      

      Note:

      Oracle recommends including the AddModule line with the other AddModule directives.

  3. Configure the SimulateHttps directive at the bottom of the httpd.conf file to send HTTPS responses back to the client, for example:
    # For use with other load balancers and front-end devices:
    SimulateHttps On
    
  4. Restart Oracle HTTP Server and test access to the server. In particular, test whether you can access a static page such as https://host:port/index.html

    Test this basic setup first. If you run into problems, troubleshoot at this stage so that they do not overlap with other potential issues, such as virtual hosting.

  5. Optionally, you can configure a VirtualHost in the httpd.conf file to handle all HTTPS requests. Separating the HTTPS requests from the HTTP requests is a more scalable approach, and may be desirable on a multi-purpose site or when a load balancer or other device in front of Oracle HTTP Server handles both HTTP and HTTPS requests.

    The following sample configuration loads the mod_certheaders module, then creates a virtual host that handles only HTTPS requests.

    # Load correct module here or where other LoadModule lines exist:
    LoadModule certheaders_module libexec/mod_certheaders.so
    # This only handles https requests:
       <VirtualHost <name>:<port>>
           # Use name and port used in url:
           ServerName <www.company.com:port>
           SimulateHttps On
           # The rest of your desired configuration for this VirtualHost goes here
       </VirtualHost>
    
  6. Restart Oracle HTTP Server and test access to the server. First test a static page such as https://host:port/index.html, and then test your application.

About Terminating SSL at Oracle HTTP Server

If SSL is configured in Oracle HTTP Server but not on Oracle WebLogic Server, then you can terminate SSL for requests sent by Oracle HTTP Server.

The following figures illustrate request flows, showing where HTTPS stops. In Figure 9-2, an HTTPS request is sent from the browser. The load balancer transmits the HTTPS request to Oracle HTTP Server. SSL is terminated in Oracle HTTP Server and the HTTP request is sent to WebLogic Server.

Figure 9-2 Terminating SSL at Oracle HTTP Server—With Load Balancer


In Figure 9-3 there is no load balancer and the HTTPS request is sent directly to Oracle HTTP Server. Again, SSL is terminated in Oracle HTTP Server and the HTTP request is sent to WebLogic Server.

Figure 9-3 Terminating SSL at Oracle HTTP Server—Without Load Balancer

Terminating SSL at Oracle HTTP Server

To instruct Oracle HTTP Server to treat requests as if they were received through HTTPS, configure the WLProxySSL directive in the mod_wl_ohs.conf file and ensure that the SecureProxy directive is not configured.

  1. Configure the mod_wl_ohs.conf file to add the WLProxySSL directive for the location of your non-SSL-configured managed servers.
    For example:
    WLProxySSL ON
    
  2. If you are using a load balancer or other device in front of Oracle HTTP Server (which is also using SSL), you might need to configure the WLProxySSLPassThrough directive instead, depending on whether it already sets the WL-Proxy-SSL header.
    For example:
    WLProxySSLPassThrough ON
    

    For more information, see your load balancer documentation. For more information on WLProxySSLPassThrough, see Parameters for Oracle WebLogic Server Proxy Plug-Ins in Using Oracle WebLogic Server Proxy Plug-Ins.

  3. Ensure that the SecureProxy directive is not configured, as it will interfere with the intended communication between the components.
    This directive is to be used only when SSL is used throughout. The SecureProxy directive is commented out in the following example:
    # To configure SSL throughout (all the way to WLS):
    # SecureProxy ON
    # WLSSLWallet  "<Path to Wallet>" 
    
  4. Enable the WebLogic Plug-In flag for your managed servers or cluster.
    By default, this option is not enabled. Complete the following steps to enable the WebLogic Plug-In flag:
    1. Log in to the Oracle WebLogic Server Administration Console.
    2. In the Domain Structure pane, expand the Environment node.
    3. Click on Clusters.
    4. Select the cluster to which you want to proxy requests from Oracle HTTP Server.
      The Configuration: General tab appears.
    5. Scroll down to the Advanced section, expand it.
    6. Click Lock and Edit.
    7. Set the WebLogic Plug-In Enabled to yes.
    8. Click Save and Activate the Changes.
    9. Restart the servers for the changes to be effective.
  5. Restart Oracle HTTP Server and test access to a Java application.
    For example: https://host:port/path/application_name.

Tuesday, November 24, 2020

How to run .SQL script using JDBC?

 

A database script file is a file that contains multiple SQL queries separated from each other. Usually, these files have the .sql extension.

Running .sql script files in Java

You can execute .sql script files in Java using the runScript() method of the ScriptRunner class of Apache iBatis (MyBatis). You construct the ScriptRunner with a connection object and pass a Reader for the script file to runScript().

Therefore, to run a script file:

  • Register the MySQL JDBC Driver using the registerDriver() method of the DriverManager class.
  • Create a connection object to establish connection with the MySQL database using the getConnection() method.
  • Initialize the ScriptRunner class of the package org.apache.ibatis.jdbc.
  • Create a Reader object to read the script file.
  • Finally, execute the script using the runScript(reader) method.

Example

Let us create a script file named sampleScript.sql and copy the following contents into it. This script creates a table named cricketers_data in a MySQL database and populates it with five records.

CREATE DATABASE exampleDB;
use exampleDB;
CREATE TABLE exampleDB.cricketers_data(
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Date_Of_Birth date,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255)
);
insert into cricketers_data values('Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
insert into cricketers_data values('Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
insert into cricketers_data values('Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
insert into cricketers_data values('Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
insert into cricketers_data values('Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
select * from exampleDB.cricketers_data;

Add the following Maven dependency (for the mybatis JAR file) to your pom.xml file:

<dependency>
   <groupId>org.mybatis</groupId>
   <artifactId>mybatis</artifactId>
   <version>3.4.5</version>
</dependency>
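
The program below also needs the MySQL JDBC driver on its classpath. If it is not already available, a dependency along these lines can be added as well (the version shown is only an example):

<dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <version>5.1.49</version>
</dependency>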

Example

Following JDBC program executes the sampleScript.sql file.

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import org.apache.ibatis.jdbc.ScriptRunner;
public class RunningScripts {
   public static void main(String args[]) throws Exception {
      //Registering the Driver
      DriverManager.registerDriver(new com.mysql.jdbc.Driver());
      //Getting the connection
      String mysqlUrl = "jdbc:mysql://localhost/talakai_noppi";
      Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
      System.out.println("Connection established......");
      //Initialize the script runner
      ScriptRunner sr = new ScriptRunner(con);
      //Creating a reader object for the script file
      Reader reader = new BufferedReader(new FileReader("E:\\sampleScript.sql"));
      //Running the script
      sr.runScript(reader);
      //Closing the reader and the connection
      reader.close();
      con.close();
   }
}

Output

Connection established......
CREATE DATABASE exampleDB
use exampleDB
CREATE TABLE exampleDB.cricketers_data(
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Date_Of_Birth date,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255)
)
insert into cricketers_data values('Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India')
insert into cricketers_data values('Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica')
insert into cricketers_data values('Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka')
insert into cricketers_data values('Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India')
insert into cricketers_data values('Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India')
select * from exampleDB.cricketers_data
First_Name Last_Name Date_Of_Birth Place_Of_Birth Country
Shikhar Dhawan 1981-12-05 Delhi India
Jonathan Trott 1981-04-22 CapeTown SouthAfrica
Kumara Sangakkara 1977-10-27 Matale Srilanka
Virat Kohli 1988-11-05 Delhi India
Rohit Sharma 1987-04-30 Nagpur India
 Reference: https://www.tutorialspoint.com/how-to-run-sql-script-using-jdbc 

Wednesday, November 4, 2020

How to get a list of images on docker registry v2

 List all repositories (effectively images):

curl -X GET https://myregistry:5000/v2/_catalog
> {"repositories":["redis","ubuntu"]}

List all tags for a repository:

curl -X GET https://myregistry:5000/v2/ubuntu/tags/list
> {"name":"ubuntu","tags":["14.04"]}
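
If the registry requires authentication or holds many repositories, the same endpoints accept basic credentials and the v2 pagination parameter n (the credentials and page size below are placeholders):

curl -u myuser:mypassword -X GET "https://myregistry:5000/v2/_catalog?n=100"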

Saturday, October 31, 2020

Install Let's Encrypt certs as trusted CAs in Ubuntu

Install ca-certificates

sudo apt-get install ca-certificates

Download the certificates from Let's Encrypt

cd /usr/share/ca-certificates
sudo wget https://letsencrypt.org/certs/isrgrootx1.pem  -O isrgrootx1.crt
sudo wget https://letsencrypt.org/certs/letsencryptauthorityx3.pem  -O letsencryptauthorityx3.crt

Update CA

sudo dpkg-reconfigure ca-certificates
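
As an alternative sketch (not part of the steps above), certificates copied into /usr/local/share/ca-certificates with a .crt extension are picked up non-interactively by update-ca-certificates:

sudo cp isrgrootx1.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates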

Wednesday, October 14, 2020

Endpoint is not Created for Service in Kubernetes

The Problem

Endpoints shows ‘none’:

$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.yy.0.1 <none> 443/TCP 9d
test ClusterIP 10.xx.97.97 <none> 6379/TCP 21s
$ kubectl describe svc test
Name: test
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"test","namespace":"default"},"spec":{"clusterIP":"10.xx.97.97","...
Selector: app=test
Type: ClusterIP
IP: 10.xx.97.97
Port: <unset> 6379/TCP
TargetPort: 6379/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>

The Solution

The service selector doesn’t match any Pod’s labels.

$ kubectl get pods --show-labels |egrep 'app=test'
$

1. Edit the yaml file and correct the selector to match the Pod’s label.

$ kubectl get pods --show-labels |egrep 'app=filebeat'
myapp-ds-c2fwm 1/1 Running 0 21h app=filebeat,controller-revision-hash=54ccfc87bd,pod-template-generation=1,release=stable
myapp-ds-rbn4z 1/1 Running 0 21h app=filebeat,controller-revision-hash=54ccfc87bd,pod-template-generation=1,release=stable
$ vi test-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  selector:
    app: filebeat
  clusterIP: 10.xx.97.97
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379

2. Apply the configuration:

$ kubectl apply -f test-svc.yaml
service/test created
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.yy.0.1 <none> 443/TCP 9d
test ClusterIP 10.xx.97.97 <none> 6379/TCP 29m

3. Show the details of the service:

$ kubectl describe svc test
Name: test
Namespace: default
Labels: [none]
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"test","namespace":"default"},"spec":{"clusterIP":"10.xx.97.97","...
Selector: app=filebeat
Type: ClusterIP
IP: 10.xx.97.97
Port: [unset] 6379/TCP
TargetPort: 6379/TCP
Endpoints: 10.zzz.1.38:6379,10.zzz.2.36:6379
Session Affinity: None
Events: [none]
$ kubectl get endpoints test
NAME ENDPOINTS AGE
test 10.zzz.1.38:6379,10.zzz.2.36:6379 39m

 Ref: https://www.thegeekdiary.com/endpoint-is-not-created-for-service-in-kubernetes/

Tuesday, September 8, 2020

Run Docker Container as a Service

 Ref: https://www.jetbrains.com/help/youtrack/standalone/run-docker-container-as-service.html


The Docker team recommends using the cross-platform, built-in restart policy for running a container as a service. To do this, configure the Docker service to start on system boot and add the --restart unless-stopped parameter to the docker run command that starts YouTrack, for example as sketched below.
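
A run command along these lines keeps the container running across reboots and daemon restarts (image tag, port, and volume path are placeholders):

docker run -d --name youtrack --restart unless-stopped \
    -v /opt/youtrack/data:/opt/youtrack/data \
    -p 8080:8080 \
    jetbrains/youtrack:<version>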


However, when several services (including YouTrack) must start in a particular order, the restart-policy method is not enough. You can use a process manager instead.


Here's an example of how to run the YouTrack container as a service on Linux with the help of systemd.


To run YouTrack container as a service on Linux with systemd:

Create a service descriptor file /etc/systemd/system/docker.youtrack.service:

[Unit]
Description=YouTrack Service
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker exec %n stop
ExecStartPre=-/usr/bin/docker rm %n
ExecStartPre=/usr/bin/docker pull jetbrains/youtrack:<version>
ExecStart=/usr/bin/docker run --rm --name %n \
    -v <path to data directory>:/opt/youtrack/data \
    -v <path to conf directory>:/opt/youtrack/conf \
    -v <path to logs directory>:/opt/youtrack/logs \
    -v <path to backups directory>:/opt/youtrack/backups \
    -p <port on host>:8080 \
    jetbrains/youtrack:<version>
[Install]
WantedBy=default.target
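
After creating the unit file, reload systemd so that it picks up the new service definition (a step systemd normally requires after a new unit file is added):

sudo systemctl daemon-reload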


Enable starting the service on system boot with the following command:


systemctl enable docker.youtrack


You can also stop and start the service manually at any moment with the following commands, respectively:


sudo service docker.youtrack stop

sudo service docker.youtrack start


Friday, August 28, 2020

Docker insecure registry

 Edit file /usr/lib/systemd/system/docker.service

add --insecure-registry=myregistrydomain.com:5000

to line 

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock


Reload systemd and restart the Docker service:

systemctl daemon-reload

service docker restart
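
Alternatively, a sketch of the same setting in /etc/docker/daemon.json (instead of editing the unit file) would be:

{
  "insecure-registries": ["myregistrydomain.com:5000"]
}

followed by the same daemon-reload and Docker restart.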

Wednesday, August 5, 2020

T24 Error: No component defined. $PACKAGE is mandatory !

To fix the issue, add this line to tafj.properties (in the $TAFJ_HOME/conf folder):
 temn.tafj.compiler.component.strict.mode=false 

Tuesday, July 21, 2020

Import DUMP file in Oracle


  1. Create user and grant permission
    alter session set "_ORACLE_SCRIPT"=true;

    create user [username] identified by [password];
    grant connect, create session, imp_full_database to [username];
    CREATE SMALLFILE TABLESPACE [TABLESPACE_NAME] DATAFILE 'FILEDATA.dbf' SIZE 7G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED LOGGING EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;
    GRANT UNLIMITED TABLESPACE TO  [username];
  2. Create Directory
    CREATE DIRECTORY BACKUP_DIR AS '/home/oracle/import';
    GRANT read,write on DIRECTORY BACKUP_DIR to   [username];
  3. Import
     impdp [username]/[password] DIRECTORY=BACKUP_DIR  DUMPFILE=File.dmp FULL=Y LOGFILE=import.log

Sunday, June 21, 2020

WSO2 Micro Integrator- Remove Request Headers From Response

Add the name of the header to be removed as a property:

<property name="<name of the header to be removed>" scope="transport" action="remove"/>

Note: The above method removes only the specified headers from the response. If you need to remove all the headers, follow the instructions below.
Add the TRANSPORT_HEADERS property

<property name="TRANSPORT_HEADERS" action="remove" scope="axis2"/>

WSO2 Micro Integrator - Enable Jms transport

Edit the file [MI_HOME]/conf/deployment.toml and add the lines below:
[[transport.jms.listener]]
name = "myQueueListener"
parameter.initial_naming_factory = "com.ibm.mq.jms.context.WMQInitialContextFactory"
parameter.broker_name = "IBM MQ"
parameter.provider_url = "X.X.X.X:1416/Channel"
parameter.connection_factory_name = "connection_factory_name "
parameter.connection_factory_type = "queue"



[[transport.jms.sender]]
name = "myQueueSender"
parameter.initial_naming_factory = "com.ibm.mq.jms.context.WMQInitialContextFactory"
parameter.broker_name = "IBM MQ"
parameter.provider_url = "X.X.X.X:1416/Channel"
parameter.connection_factory_name = "connection_factory_name "
parameter.connection_factory_type = "queue"
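
With the listener defined, a proxy service can consume from a queue by referring to the listener name as its connection factory. A minimal sketch (the proxy and queue names are placeholders):

<proxy name="MyJmsProxy" transports="jms" startOnLoad="true" xmlns="http://ws.apache.org/ns/synapse">
    <target>
        <inSequence>
            <log level="full"/>
            <drop/>
        </inSequence>
    </target>
    <parameter name="transport.jms.ConnectionFactory">myQueueListener</parameter>
    <parameter name="transport.jms.DestinationType">queue</parameter>
    <parameter name="transport.jms.Destination">MyQueue</parameter>
</proxy>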


Copying IBM Websphere MQ libraries

These instructions are tested on IBM WebSphere MQ version 8.0.0.4. However, you can follow them for other versions appropriately.
  • Create a new directory named wmq-client, and then create another new directory named lib inside it.
  • Copy the following JAR files from the <IBM_MQ_HOME>/java/lib/ directory (where <IBM_MQ_HOME> refers to the IBM WebSphere MQ installation directory) to the wmq-client/lib/ directory.
com.ibm.mq.allclient.jar
mqcontext.jar
jms.jar
providerutil.jar
  • Create a pom.xml file inside the wmq-client/ directory and add all the required dependencies as shown in the example below.
<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>wmq-client</groupId>
<artifactId>wmq-client</artifactId>
<version>8.0.0.4</version>
<packaging>bundle</packaging>
<dependencies>
    <dependency>
        <groupId>com.ibm</groupId>
        <artifactId>fscontext</artifactId>
        <version>8.0.0.4</version>
        <scope>system</scope>
        <systemPath>${basedir}/lib/mqcontext.jar</systemPath>
    </dependency>
    <dependency>
        <groupId>com.ibm</groupId>
        <artifactId>providerutil</artifactId>
        <version>8.0.0.4</version>
        <scope>system</scope>
        <systemPath>${basedir}/lib/providerutil.jar</systemPath>
    </dependency>
    <dependency>
        <groupId>com.ibm</groupId>
        <artifactId>allclient</artifactId>
        <version>8.0.0.4</version>
        <scope>system</scope>
        <systemPath>${basedir}/lib/com.ibm.mq.allclient.jar</systemPath>
    </dependency>
    <dependency>
        <groupId>javax.jms</groupId>
        <artifactId>jms</artifactId>
        <version>1.1</version>
        <scope>system</scope>
        <systemPath>${basedir}/lib/jms.jar</systemPath>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.felix</groupId>
            <artifactId>maven-bundle-plugin</artifactId>
            <version>2.3.4</version>
            <extensions>true</extensions>
            <configuration>
                <instructions>
                    <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
                    <Bundle-Name>${project.artifactId}</Bundle-Name>
                    <Export-Package>*;-split-package:=merge-first</Export-Package>
                    <Private-Package/>
                    <Import-Package/>
                    <Embed-Dependency>*;scope=system;inline=true</Embed-Dependency>
                    <DynamicImport-Package>*</DynamicImport-Package>
                </instructions>
            </configuration>
        </plugin>
    </plugins>
</build>
</project>
  • Navigate to the wmq-client directory using your Command Line Interface (CLI), and execute the following command to build the project: mvn clean install
  • Stop the WSO2 Micro Integrator, if it is already running.
  • Remove any existing IBM MQ client JAR files from the MI_HOME/dropins directory and the MI_HOME/lib directory.
  • Copy the <wmq-client>/target/wmq-client-8.0.0.4.jar file to the MI_HOME/dropins directory.
  • Download the jta.jar file from the maven repository, and copy it to the MI_HOME/lib directory.
Reference: https://ei.docs.wso2.com/en/7.1.0/micro-integrator/setup/brokers/configure-with-IBM-websphereMQ

Sunday, June 14, 2020

JBOSS 7.3 connect to remote ActiveMQ


  1. Create a module for ActiveMQ
    mkdir -pv $JBOSS_HOME/modules/system/layers/base/org/apache/activemq/main
    cp activemq-all-5.15.9.jar $JBOSS_HOME/modules/system/layers/base/org/apache/activemq/main
    Create module.xml with content below:
    <?xml version="1.0" encoding="UTF-8"?>
    <module xmlns="urn:jboss:module:1.5" name="org.apache.activemq">
       <resources>
            <resource-root path="activemq-all-5.15.9.jar"/>
        </resources>
        <dependencies>
            <module name="javax.api"/>
            <module name="javax.jms.api"/>
        </dependencies>
    </module>
  2. Choose "Configuration -> Naming -> Binding" and click Add (external-context):
    Class: javax.naming.InitialContext
    Environment:
    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
    java.naming.provider.url=tcp://xxxxx:61616
    Module: org.apache.activemq
    Name: java:global/remoteContext
    Binding Type: external-context
  3. Choose "Configuration -> Naming -> Binding" and click Add (lookup) for the ConnectionFactory, Queue, etc. (a Java lookup sketch follows this list):
    Binding Type: lookup
    Lookup: java:global/remoteContext/ConnectionFactory
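
Application code can then look up the ConnectionFactory and destinations through the external context. A rough Java sketch using javax.naming.InitialContext and javax.jms types (the queue name is a placeholder; ActiveMQ's JNDI provider resolves dynamicQueues/<name> on the fly):

// Hypothetical lookup of the remote ActiveMQ resources bound above
InitialContext ctx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:global/remoteContext/ConnectionFactory");
Queue queue = (Queue) ctx.lookup("java:global/remoteContext/dynamicQueues/MyQueue");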

Monday, June 1, 2020

JMS Message as a MQRFH2 Message

MQGetMessageOptions gmo = new MQGetMessageOptions();
gmo.options = CMQC.MQGMO_PROPERTIES_FORCE_MQRFH2 + CMQC.MQGMO_FAIL_IF_QUIESCING + CMQC.MQGMO_NO_WAIT;
MQMessage receiveMsg = new MQMessage();
queue.get(receiveMsg, gmo);
if (CMQC.MQFMT_RF_HEADER_2.equals(receiveMsg.format)){
   // Read the whole message into a byte array so the RFH2 header can be parsed from it
   receiveMsg.seek(0);
   byte[] b = new byte[receiveMsg.getMessageLength()];
   receiveMsg.readFully(b);
   DataInputStream inputStream = new DataInputStream(new ByteArrayInputStream(b));
   MQRFH2 rfh2 = new MQRFH2(inputStream);
   int strucLen = rfh2.getStrucLength();
   int encoding = rfh2.getEncoding();
   int CCSID    = rfh2.getCodedCharSetId();
   String format= rfh2.getFormat();
   int flags    = rfh2.getFlags();
   int nameValueCCSID = rfh2.getNameValueCCSID();
   String[] folderStrings = rfh2.getFolderStrings();
   for (String folder : folderStrings)
      System.out.println("Folder: "+folder);
   // Whatever remains in the stream after the RFH2 header is the message data
   b = new byte[inputStream.available()];
   inputStream.read(b);
   System.out.println("Data: "+new String(b));
}else if (CMQC.MQFMT_STRING.equals(receiveMsg.format)){
   String msgStr = receiveMsg.readStringOfByteLength(receiveMsg.getMessageLength());
   System.out.println("Data: "+msgStr);
}else{
   byte[] b = new byte[receiveMsg.getMessageLength()];
   receiveMsg.readFully(b);
   System.out.println("Data: "+new String(b));
}

Friday, May 8, 2020

Red Hat Fuse - JMS for IBM MQ

Drag and drop the JMS component.


In the properties of the JMS component, select the Advanced tab and enter the connection factory (the connection factory name has to begin with #, e.g. #connectionFactory).

Add com.ibm.mq.allclient.jar to the classpath (com.ibm.mq.allclient.jar is provided by IBM MQ).

Click the Configuration tab and add a bean with the same name as the connection factory (e.g. connectionFactory); a sample bean definition is sketched below.
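
A bean definition for IBM MQ along these lines is one possibility (the class comes from the IBM MQ client JAR; host, port, queue manager, and channel are placeholders):

<bean id="connectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
    <property name="hostName" value="mqhost"/>
    <property name="port" value="1414"/>
    <property name="queueManager" value="QM1"/>
    <property name="channel" value="SYSTEM.DEF.SVRCONN"/>
    <property name="transportType" value="1"/>
</bean>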


JBoss EAP must run with the standalone-full.xml profile.

Red Hat Fuse - JDBC

Drag and drop the JDBC component onto the Design tab.
Input the URI: jdbc:[datasource], e.g. jdbc:ExampleDS.
Click the Configuration tab and add a bean whose name is the datasource name (e.g. ExampleDS). A minimal route sketch follows.
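
For example, a route along these lines sends the SQL in the message body to the datasource (the route trigger, query, and table name are illustrative only; camel-jdbc executes the message body as the statement):

<route>
    <from uri="timer:poll?period=60000"/>
    <setBody><constant>select * from my_table</constant></setBody>
    <to uri="jdbc:ExampleDS"/>
    <log message="Query result: ${body}"/>
</route>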

Tuesday, March 10, 2020

IBM Integration Bus - When you write Java™ code for a JavaCompute node, you can include references to other Java projects and JAR files.

To complete this task, you must have completed the following tasks:
  • Add a JavaCompute node to your message flow.
  • Create Java code for a JavaCompute node.
The Java code in a JavaCompute node might contain references to other Java projects in the Eclipse workspace (internal dependencies), or to external JAR files, for example the JavaMail API (external dependencies), or a set of JAXB Java object classes (internal or external). If other JAR files are referenced, you must add the files to the project class path.
See Using JAXB with a JavaCompute node for an example of a project using Java objects generated by the JAXB binding compiler.
  1. Right-click the project folder for the project on which you are working, and click Properties.
  2. Click Java Build Path in the left pane.
  3. Click the Libraries tab.
  4. Complete one of the following steps:
    • To add an internal dependency, click Add JARs, select the JAR file that you want to add, then click OK.
    • To add an external dependency, click Add External JARs, select the JAR file that you want to add, then click Open. Copy the JAR file to the shared-classes directory required. For more details of the shared-classes directories available and the effects of each, see Java shared classloader. If you do not copy the JAR file to a valid shared-classes directory, ClassNotFoundException exceptions are generated at run time.
You have now added a code dependency.

Reference: https://www.ibm.com/support/knowledgecenter/en/SSMKHH_9.0.0/com.ibm.etools.mft.doc/ac30280_.htm

Loads all the JAR files located within the shared-classes directories.

Loads all the JAR files located within the shared-classes directories. The precedence order of loading is dictated by the directories the JAR files are located in.

Determine the integration node workpath to use by running the mqsireportbroker command as follows:

mqsireportbroker integrationNodeName

JAR files are loaded in the following precedence order:
  • For Windows
    • workpath\config\<my_int_node_name>\<my_int_server_label>\shared-classes
  • For Linux, UNIX, and z/OS
    • workpath/config/<my_int_node_name>/<my_int_server_label>/shared-classes
  • For Windows
    • workpath\config\<my_int_node_name>\shared-classes
  • For Linux, UNIX, and z/OS
    • workpath/config/<my_int_node_name>/shared-classes
  • For Windows
    • workpath\shared-classes
  • For Linux, UNIX and z/OS
    • workpath/shared-classes

Sunday, March 1, 2020

IBM Integration Bus - Filter Node

Filter Node
The “Filter” node is the simplest of the routing nodes and provides a simple “If-Then-Else” routing mechanism within a Message Flow. The “Filter” node is an ESQL node. The ESQL executed in this node must terminate with an ESQL RETURN statement. This statement must return a Boolean value. The route taken from this node depends upon the value of the RETURN statement.
This node has four output terminals. These terminals are:

  •  Failure (A failure in the ESQL code)
  • True (ESQL code returns a “True” value)
  • False (ESQL code returns a “False” value)
  • Unknown (ESQL code returns a “NULL” value or does not “RETURN” a boolean)
Note: The “Filter” node propagates an unchanged input message to an output terminal. If you are referring to the input message to interrogate its contents in a Database or Filter node, use correlation name Body to refer to the start of the message. Body is equivalent to Root followed by the parser name (for example, Root.XMLNS), which you can use if you prefer. You cannot use the Body correlation name in a DatabaseInput node.
You must use these different correlation names because there is only one message to which to refer in a Database or Filter node; you cannot create an output message in these nodes. Use a Compute node to create an output message.


Example:

CREATE FILTER MODULE HTTPInputMessageFlow_Filter
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        
        DECLARE space1 NAMESPACE 'namespace1';
        IF Body.space1:root.space1:example = 'ABCDE' THEN
            RETURN TRUE;
        ELSE
            RETURN FALSE;
        END IF;
    END;
END MODULE;



Wednesday, February 26, 2020

IBM Integration Bus- DFDL in ESQL

To parse binary data to XML in ESQL:

CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL') PARSE(binaryData ENCODING InputRoot.Properties.Encoding TYPE '{Namespace}:RootElement');

To convert XML to binary in ESQL:
We have to prepare the DFDL tree in OutputRoot first:

CREATE LASTCHILD OF OutputRoot DOMAIN('DFDL') NAME 'DFDL';
DECLARE ns NAMESPACE 'Namespace';
SET OutputRoot.DFDL.ns:RootElement = ...

The output node serializes the DFDL tree to binary automatically. To obtain the bit stream manually, use ASBITSTREAM:
DECLARE bPayload BLOB ASBITSTREAM(InputRoot.DFDL.HRRecords CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);

IBM Integration Bus - Working with MQ ( request and response with same correlId).

We call a service exposed over a pair of queues: we send the request to the input queue and get the response from the output queue with the same CorrelId.
The code for "set message to queue" compute node
CREATE COMPUTE MODULE HTTPInputMessageFlow_set_message_to_queue
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- CALL CopyMessageHeaders();
        -- CALL CopyEntireMessage();
        SET OutputRoot.BLOB.BLOB = InputRoot.BLOB.BLOB;
        RETURN TRUE;
    END;

    CREATE PROCEDURE CopyMessageHeaders() BEGIN
        DECLARE I INTEGER 1;
        DECLARE J INTEGER;
        SET J = CARDINALITY(InputRoot.*[]);
        WHILE I < J DO
            SET OutputRoot.*[I] = InputRoot.*[I];
            SET I = I + 1;
        END WHILE;
    END;

    CREATE PROCEDURE CopyEntireMessage() BEGIN
        SET OutputRoot = InputRoot;
    END;
END MODULE;

The code for "set correlId" compute node
CREATE COMPUTE MODULE HTTPInputMessageFlow_set_correlId
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- CALL CopyMessageHeaders();
        -- CALL CopyEntireMessage();
        SET OutputRoot.MQMD.CorrelId = InputLocalEnvironment.WrittenDestination.MQ.DestinationData.correlId;
        RETURN TRUE;
    END;

    CREATE PROCEDURE CopyMessageHeaders() BEGIN
        DECLARE I INTEGER 1;
        DECLARE J INTEGER;
        SET J = CARDINALITY(InputRoot.*[]);
        WHILE I < J DO
            SET OutputRoot.*[I] = InputRoot.*[I];
            SET I = I + 1;
        END WHILE;
    END;

    CREATE PROCEDURE CopyEntireMessage() BEGIN
        SET OutputRoot = InputRoot;
    END;
END MODULE;

IBM Integration Bus- ESQL Code Snippets

ESQL Code Snippets

1. Convert XMLNSC to BLOB 

Sometimes you may need to convert your payload to a BLOB (Binary Large Object). To accomplish this, use the ASBITSTREAM function, which returns a bit-stream representation of your payload. For example, when I need to put my payload on an MQ queue, I convert it to a BLOB first.
 DECLARE myBlob BLOB ASBITSTREAM(InputRoot.XMLNSC CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);  

2. Convert BLOB to CHAR

 DECLARE myChar CHAR CAST(myBLOB AS CHAR CCSID InputRoot.Properties.CodedCharSetId Encoding InputRoot.Properties.Encoding);  

3. Convert CHAR to BLOB

 DECLARE myBlob BLOB CAST( myChar AS BLOB CCSID InputRoot.Properties.CodedCharSetId);  

4. Convert BLOB to XMLNSC 

 CREATE LASTCHILD OF OutputRoot.XMLNSC DOMAIN('XMLNSC') PARSE(myBlob, InputRoot.Properties.Encoding, InputRoot.Properties.CodedCharSetId);  

5. Left Padding

 RIGHT('0000000000' || CAST(field AS CHAR),10);  
 The above left-pads field with zeros to a length of 10.


6. Nil Element & Namespace Declaration

 SET OutputRoot.XMLNSC.ns:myElement.(XMLNSC.NamespaceDecl)xmlns:"xsi" ='http://www.w3.org/2001/XMLSchema-instance';    
 SET OutputRoot.XMLNSC.ns:myElement.(XMLNSC.Attribute)xsi:nil = 'true';   

7. Convert Payload to a String

 DECLARE myBlob BLOB;  
 SET myBlob = ASBITSTREAM(InputRoot.XMLNSC CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);  
 DECLARE myChar CHAR CAST(myBlob AS CHAR CCSID InputRoot.Properties.CodedCharSetId Encoding InputRoot.Properties.Encoding);  

8. Backup a JSON Payload without losing arrays

 CREATE LASTCHILD OF OutputRoot DOMAIN('JSON') TYPE Name NAME 'JSON';  
 CREATE FIELD OutputRoot.JSON.Data IDENTITY(JSON.Object)Data;  
 SET OutputRoot.JSON.Data = InputLocalEnvironment.Backup.Data;   

9.Pass Parameters to an HTTP request Node 

 SET OutputLocalEnvironment.Destination.HTTP.QueryString.param1 = 'param1';  
Each parameter must be individually set.

10. Convert BLOB to JSON 

 CREATE LASTCHILD OF OutputRoot DOMAIN('JSON') PARSE(InputRoot.BLOB.BLOB);  

11. Select value from an ESQL array 

This statement allows you to select the name without iterating through OutputLocalEnvironment.Variables.Person[] where the Id value matches the nameId.

 SET name = the(select item fieldvalue(r.Name) from OutputLocalEnvironment.Variables.Person[] as r where r.Id = nameId);   
