Calculating number of days between dates in DataWeave

It can be very useful to be able to calculate the difference between two dates. However, calculating this difference as a number of days turned out to be a little harder to achieve in a Mule 3 DataWeave transformation than it first seemed.

The default method

We started with the default method to calculate the difference between two dates: just subtracting one from the other. This works fine; however, the result of this subtraction is not a number. The difference between two dates is represented by a Period object. In most cases this won’t be a problem, but in our case we explicitly needed to retrieve the number of days between two dates.

Even though the Period object contains a getDays() method, this would not provide the data we needed. For example, when the Period is P2M7D (2 months and 7 days), the getDays() method just returns 7, so the months will not be translated to days.

Using the getMonths() and getYears() methods and calculating the days by ourselves also isn’t a solution because, of course, month lengths vary and years could include leap years.

Doing it the Java way

So… Back to the drawing board. The next idea was to calculate the difference the same way we could in Java. This means translating dates to (milli)seconds, subtracting and calculating this value back to seconds, minutes, hours and finally days.

In DataWeave, :datetime objects can be translated to seconds (so not to milliseconds like in Java!) fairly easily, just by casting them to :number objects. Unfortunately, this does not work for :date objects, so we first have to cast a :date to a :datetime object (via a :string).

When we have both dates represented as seconds, we’re almost done. Just subtract them and we have the difference in seconds. Then it’s a piece of cake to make the calculation from seconds to days.

Example

I have created a DataWeave transformation which demonstrates both of the situations that were explained earlier:

%dw 1.0
%output application/json

// Date values
%var firstDate = "2018-08-24" as :date { format: "yyyy-MM-dd" }
%var secondDate = "2018-10-31" as :date { format: "yyyy-MM-dd" }

// Calculate dates to seconds
%var firstAsNumber = firstDate as :string { format: "yyyy-MM-dd'T00:00:00Z'" } as :datetime as :number
%var secondAsNumber = secondDate as :string { format: "yyyy-MM-dd'T00:00:00Z'" } as :datetime as :number
---
{
     firstDate: firstDate,
     secondDate: secondDate,
     defaultMethod: {
         difference: (firstDate - secondDate),
         differenceDays: (firstDate - secondDate).days
     },
     javaWay: {
         difference: (secondAsNumber - firstAsNumber),
         differenceDays: (secondAsNumber - firstAsNumber) / 60 / 60 / 24
     }
}

The result of the above transformation can be seen below:

{
    "firstDate": "2018-08-24",
    "secondDate": "2018-10-31",
    "defaultMethod": {
        "difference": "P2M7D",
        "differenceDays": 7
    },
    "javaWay": {
        "difference": 5875200,
        "differenceDays": 68
    }
}

Tracing messages using end-to-end logging in Mule 3

When you need to analyze an issue in Mule, it can be very useful to have an identifier to correlate the log entries. In this blog post, I will describe how we achieved this in Mule.

An important note is that we are using Mule 3. In Mule 4 various things have changed. For example, it is easier to add target variables to connectors, without having to wrap them inside a Message Enricher. So if you are using Mule 4, you probably want to consider using that feature.

The default message.correlationId

By default, Mule has a correlationId property available within the Mule message. At the start of our search for a solution, this (obviously) looked like a pretty good place for storing correlation data. However, during our research we found some issues with this property.

One of the biggest benefits of this property is that it is available by default within the Mule message variable. Also, when another Mule flow is called via an outbound HTTP Transport Barrier, the correlationId will be preserved.

When the Mule flow reaches an outbound HTTP Transport Barrier, the correlationId is passed via the MULE_CORRELATION_ID HTTP header. On the other side, the inbound HTTP Transport Barrier will automatically put the MULE_CORRELATION_ID (back) in the correlationId of the message. Sounds perfect for what we want to achieve, doesn’t it? Well, that was what we thought too. Until we reached the first outbound HTTP Transport Barrier that pointed to an external webservice instead of another Mule flow. This external webservice obviously doesn’t return the MULE_CORRELATION_ID header in its response, which unfortunately causes message.correlationId to be set to null.

To test the above, I have created 2 simple Mule flows. The example flow is the entry point. This flow contains 2 outbound HTTP Transport Barriers. The first one points to another Mule flow. The second one points to an external webservice. In this case, this is the Google search page.

The Init correlation ID steps both contain the following Groovy script, which sets the correlationId to the message unique ID if it has not been set before. The reason we also include the return payload statement is that the payload is overwritten by the outcome of the Groovy script; without the return statement, the message payload would be replaced by the correlationId.

// Keep an existing correlationId, otherwise fall back to the message unique ID
message.correlationId = (message.correlationId != null ? message.correlationId : message.id);
// Return the payload so it is not overwritten by the result of this script
return payload;

The reason we do this in a Groovy script instead of an Expression, is that the message variable within an Expression is represented by an org.mule.el.context.MessageContext object. This class unfortunately only contains a getter for the correlationId. Within a Groovy script, the message variable is represented by an org.mule.api.MuleMessage object, which contains both a getter and a setter for the correlationId property.


Executing the example flow will result in the following log messages:

INFO  2018-10-15 18:20:13,081 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 1: messageId=4fadcef0-d097-11e8-88cf-34f39ac37765, correlationId=4fadcef0-d097-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:20:13,097 [[mule-trace-example].HTTP_Listener_Configuration.worker.02] org.mule.api.processor.LoggerMessageProcessor: OtherFlow - 1: messageId=4fb6f6b0-d097-11e8-88cf-34f39ac37765, correlationId=4fadcef0-d097-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:20:13,106 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 2: messageId=4fadcef0-d097-11e8-88cf-34f39ac37765, correlationId=4fadcef0-d097-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:20:13,276 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 3: messageId=4fadcef0-d097-11e8-88cf-34f39ac37765, correlationId=null

As you can see, the correlationId is lost right after the HTTP to Google step. A solution for this, mentioned in multiple blog and forum posts, is to wrap the outbound Transport Barriers in a Message Enricher or Async component. Both result in the following log messages:

INFO  2018-10-15 18:21:48,770 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 1: messageId=6a6fbe20-d096-11e8-88cf-34f39ac37765, correlationId=6a6fbe20-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:48,858 [[mule-trace-example].HTTP_Listener_Configuration.worker.02] org.mule.api.processor.LoggerMessageProcessor: OtherFlow - 1: messageId=6ab070a0-d096-11e8-88cf-34f39ac37765, correlationId=6a6fbe20-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:48,881 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 2: messageId=6a6fbe20-d096-11e8-88cf-34f39ac37765, correlationId=6a6fbe20-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:49,088 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 3: messageId=6a6fbe20-d096-11e8-88cf-34f39ac37765, correlationId=6a6fbe20-d096-11e8-88cf-34f39ac37765

Even though using an Async or Message Enricher is a solution for preserving the correlationId (a sketch of the enricher approach is shown below), it would have meant refactoring a lot of existing flows in our case, which was not very desirable.
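For reference, wrapping the call to the external webservice in a Message Enricher could look something like the hedged sketch below. The config-ref and the target variable name are just placeholders; the point is that the enricher only copies the configured target back into the original message, so the correlationId is left untouched:

<enricher target="#[flowVars.googleResponse]" doc:name="Message Enricher">
	<http:request config-ref="HTTP_Request_Configuration" path="/" method="GET" doc:name="HTTP to Google"/>
</enricher>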

What about using a sessionVar?

The next idea we had, was to use a sessionVar to track the correlation ID. However, when testing this in a flow, we noticed that the sessionVar was not propagated when crossing an HTTP Transport Barrier to another Mule flow.

When we have a look at the MuleSoft documentation, we can read that this is as designed: “Session variables can be easily propagated from one flow to another through the VM transport, or a flow reference, but not through the HTTP Connector.”

Also, sessionVars are removed in Mule 4, so this is probably not the way to go.

Using a custom flowVar

The next option was using a custom flowVar. This can be set using the same logic as we were using earlier (so by initializing it via the message unique ID). The main disadvantage of this approach is that when crossing Transport Barriers, this flowVar is not automatically preserved, which requires extra steps to manually add (and use) these headers:


In the above example, the Init trace ID steps set a flowVar named traceId using the expression below. As you can see, in addition to the logic we used earlier for the correlationId, we also check for a value of the trace_id_header inbound property. The same header is set in the Copy trace ID to HTTP headers step prior to calling the otherFlow.

#[message.inboundProperties['trace_id_header'] != null ? message.inboundProperties['trace_id_header'] : message.id]
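The Copy trace ID to HTTP headers step itself only has to copy the flowVar to an outbound property, so it travels along as an HTTP header to the other flow. A minimal sketch (assuming the flowVar is called traceId):

<set-property propertyName="trace_id_header" value="#[flowVars.traceId]" doc:name="Copy trace ID to HTTP headers"/>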

The above example will lead to the following log entries.

INFO  2018-10-15 18:21:55,162 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 1: messageId=6e7010b0-d096-11e8-88cf-34f39ac37765, traceId=6e7010b0-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:55,170 [[mule-trace-example].HTTP_Listener_Configuration.worker.02] org.mule.api.processor.LoggerMessageProcessor: OtherFlow - 1: messageId=6e75b600-d096-11e8-88cf-34f39ac37765, traceId=6e7010b0-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:55,178 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 2: messageId=6e7010b0-d096-11e8-88cf-34f39ac37765, traceId=6e7010b0-d096-11e8-88cf-34f39ac37765

INFO  2018-10-15 18:21:55,297 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] org.mule.api.processor.LoggerMessageProcessor: Example - 3: messageId=6e7010b0-d096-11e8-88cf-34f39ac37765, traceId=6e7010b0-d096-11e8-88cf-34f39ac37765

As you can see, the traceId is available from the start to the end of the flow now.

Wrapping it up in a custom connector

To avoid duplicating this code in a lot of flows, and to improve maintainability in case we want to add extra trace data later, we decided to write a custom Mule connector that can be used for these steps. The source of this custom connector is available via Bitbucket.

If you clone the source from Bitbucket and open the project in Anypoint Studio via File > Import… >  Anypoint Connector Project from External Location (this requires the Anypoint DevKit Plugin to be installed), you should be able to build it by right clicking the project > Anypoint Connector > Install or update.

Now, if you search for Trace in the Mule Palette, you should be able to drag the custom connector to the flow. This Trace component contains 3 operations:

  • Initialize trace ID
    This operation replaces the Init trace ID steps from the custom flowVar setup. It is possible to force a default trace ID via the optional Default Trace Id parameter.
  • Initialize trace outbound properties
    This operation replaces the Copy trace ID to HTTP headers step from the custom flowVar setup.
  • Log message with trace data
    This operation “replaces” the Log steps from the previous setups. This method will automatically print the message unique ID and the trace ID with the actual message. Besides the log message, you can optionally specify the severity level, log phase and meta data.


Now, the result will look a little bit different, but it still contains the same information:

INFO  2018-10-15 18:22:02,874 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] nl.whitehorses.mule.trace.TraceConnector: {
        "messageId": "73013190-d096-11e8-88cf-34f39ac37765",
        "traceId": "73013190-d096-11e8-88cf-34f39ac37765",
        "flow": "mule-connector-example",
        "phase": "START",
        "metaData": {
                "serial": "dummy1234"
        },
        "message": "Example - 1"
}

INFO  2018-10-15 18:22:02,884 [[mule-trace-example].HTTP_Listener_Configuration.worker.02] nl.whitehorses.mule.trace.TraceConnector: {
        "messageId": "730e9f10-d096-11e8-88cf-34f39ac37765",
        "traceId": "73013190-d096-11e8-88cf-34f39ac37765",
        "flow": "mule-connector-otherFlow",
        "phase": "START",
        "message": "OtherFlow - 1"
}

INFO  2018-10-15 18:22:02,887 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] nl.whitehorses.mule.trace.TraceConnector: {
        "messageId": "73013190-d096-11e8-88cf-34f39ac37765",
        "traceId": "73013190-d096-11e8-88cf-34f39ac37765",
        "flow": "mule-connector-example",
        "phase": "AFTER_TRANSPORT",
        "message": "Example - 2"
}

INFO  2018-10-15 18:22:03,010 [[mule-trace-example].HTTP_Listener_Configuration.worker.01] nl.whitehorses.mule.trace.TraceConnector: {
        "messageId": "73013190-d096-11e8-88cf-34f39ac37765",
        "traceId": "73013190-d096-11e8-88cf-34f39ac37765",
        "flow": "mule-connector-example",
        "phase": "END",
        "message": "Example - 3"
}

Conclusion

Even though Mule has a correlationId variable wrapped in the Mule message, this might not be a sufficient solution in every case. Using a sessionVar won’t work either, because sessionVars are not propagated when crossing HTTP Transport Barriers.

Another solution is to create your own flowVar for tracing purposes. The biggest disadvantage of this approach is that you’ll need to implement a mechanism to pass the trace data over Transport Barriers as well.

We have chosen to implement the flowVar solution, but implemented the required logic in a custom Mule connector to prevent unnecessary duplicate code and for better maintainability.

In case you want to try the examples from this blog yourself, you can download the example flows here.

Logging error responses in MuleSoft Scatter-Gather

The Scatter-Gather component in MuleSoft is a very nice component when you want to execute several steps in parallel. However, when an error occurs in one of the parallel flows, for example when a webservice returns an HTTP 500 response, you probably want to log not only the error message but also the error payload/body, so you are able to analyze the issue(s). After some debugging and research, it turned out to be fairly easy to achieve this using 3 steps in the Catch Exception Strategy.

The first step is to copy the exception variable to a flowVar, otherwise you will not be able to use this variable from the Transform Message in the next step.

Copy exception to flowVars
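A minimal sketch of such a step (the variable name is just an example):

<set-variable variableName="exception" value="#[exception]" doc:name="Copy exception to flowVars"/>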

Next, you will be able to build the actual log message using a Transform Message. In our case, we chose to include the following data in the log message:

  • The message from the exception. This is the top level error message.
  • The Element from the info HashMap. This contains information about where the error occurred in the flow file.
  • Then we check whether the exception also contains an exceptions property. This is the case when the error(s) occur within one or more of the Scatter-Gather routes. Because we first check if this property exists, the exception handler stays compatible with faults that occur outside the Scatter-Gather as well. From the inner exceptions, we extract the message and detailedMessage, but more interestingly, we can also access the MuleMessage from the Scatter-Gather route and select the (error) payload(s) from it.
  • Finally we will also log the value of the payload at the moment the error occurred.

Build log entry
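A rough DataWeave sketch of such a Transform Message is shown below. Be aware that the exact property names on the exception object (message, info, exceptions, detailedMessage and the path to the route payload) are assumptions based on the description above; verify them against your own Mule version while debugging:

%dw 1.0
%output application/json
%var ex = flowVars.exception
---
{
	error: "Unexpected error occurred",
	message: ex.message,
	element: ex.info.Element,
	// causedBy is only added when the error originated inside one of the Scatter-Gather routes
	(causedBy: ex.exceptions map {
		message: $.message,
		detailMessage: $.detailedMessage,
		payload: $.event.message.payload
	}) when ex.exceptions?,
	payload: payload
}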

Finally, we actually log the message that we have built in the previous step and do the rest of our error handling, like setting the error response on the payload and returning an HTTP 500 code. The entry in the logfile will look something like this:

{
  "error": "Unexpected error occurred",
  "message": "Exception(s) were found for route(s): \r\n\t1: Response code 500 mapped as failure.\r\n\t4: Response code 500 mapped as failure.",
  "element": "/logging-scatter-gather-fault-payloads/processors/0 @ logging-scatter-gather-fault-payloads:logging-scatter-gather-fault-payloads.xml:24 (Scatter-Gather)",
  "causedBy": [
    {
      "message": "route number 1 failed to be executed. Failed to route event via endpoint: org.mule.module.http.internal.request.DefaultHttpRequester@2ae745a0.",
      "detailMessage": "org.mule.module.http.internal.request.ResponseValidatorException: Response code 500 mapped as failure.",
      "payload": "An 'unexpected' error occurred!"
    },
    {
      "message": "route number 4 failed to be executed. Failed to route event via endpoint: org.mule.module.http.internal.request.DefaultHttpRequester@2c10840e.",
      "detailMessage": "org.mule.module.http.internal.request.ResponseValidatorException: Response code 500 mapped as failure.",
      "payload": "An 'unexpected' error occurred!"
    }
  ],
  "payload": null
}

Be aware that MuleSoft (Java) starts counting at 0, so in the above example, the second and fifth route of the Scatter-Gather component (which is specified on line 24 of the flow XML file) have failed. We can verify this behavior if we check the demo implementation of the flow:

Demo implementation

The Struggles of Personalization in ADF

A Mike Heeren & Richard Olrichs co-production

ADF comes with out-of-the-box personalization features. This means that whenever you configure personalization, users can persist changes they make to the application across sessions and personalize their experience with the application. We have seen that this feature can also confuse some of our users, so it is not always wise to use it; it depends on your use case. However, when recently implementing personalization on an ADF 12.2.1.2 application, we ran into a couple of issues regarding persisting these personalizations to the MDS.

We felt that most of the blogs we came across while implementing these features share the joyful out-of-the-box configuration: just select some checkboxes in the properties and you are done, ready to enjoy your beer and have your designers and product owners cheer for you.
Sometimes, however, real-life applications at customers do not match the out-of-the-box configuration, and things can become a little more tricky than you might expect.

Let’s start at the top and go through some of the steps you will always need within your application for personalization to work: you need to have authentication and authorization set up. If you also want to follow our struggles, we have posted a sample application at the end of this blog to show both the problems and the solutions.

Getting started with customizations

The basic configuration of customization in ADF is pretty simple. We start with a simple ADF application (with authentication already configured) and select Project Properties > ADF View. Here we check Enable user customization and Across sessions using MDS, as seen below:

When enabling user customizations, the following files are edited:

  • In the adf-config.xml file the following lines are added:
<adf-faces-config xmlns="http://xmlns.oracle.com/adf/faces/config">
	<persistent-change-manager>
		<persistent-change-manager-class>oracle.adf.view.rich.change.MDSDocumentChangeManager</persistent-change-manager-class>
	</persistent-change-manager>
</adf-faces-config>
  • In the web.xml file the javax.faces.FACELETS_RESOURCE_RESOLVER context-param is changed from oracle.adfinternal.view.faces.facelets.rich.AdfFaceletsResourceResolver to oracle.adfinternal.view.faces.facelets.rich.MDSFaceletsResourceResolver, and the following blocks are added:
<servlet>
	<servlet-name>adflibResources</servlet-name>
	<servlet-class>oracle.adf.library.webapp.ResourceServlet</servlet-class>
</servlet>
...
<servlet-mapping>
	<servlet-name>adflibResources</servlet-name>
	<url-pattern>/adflib/*</url-pattern>
</servlet-mapping>
...
<filter>
	<filter-name>ADFLibraryFilter</filter-name>
	<filter-class>oracle.adf.library.webapp.LibraryFilter</filter-class>
</filter>
...
<filter-mapping>
	<filter-name>ADFLibraryFilter</filter-name>
	<url-pattern>/*</url-pattern>
	<dispatcher>FORWARD</dispatcher>
	<dispatcher>REQUEST</dispatcher>
</filter-mapping>
...
<context-param>
	<param-name>oracle.adf.jsp.provider.0</param-name>
	<param-value>oracle.mds.jsp.MDSJSPProviderHelper</param-value>
</context-param>
<context-param>
	<param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
	<param-value>oracle.adf.view.rich.change.FilteredPersistenceChangeManager</param-value>
</context-param>
  • Finally, the following block will be added to the project .jpr file:
<hash n="oracle.adfdtinternal.view.rich.setting.ADFViewSettings">
	<value n="ENABLE_ADF_WEBAPP_LIB_SUPPORT" v="true"/>
	<value n="KEY_ENABLE_USER_CUSTOMIZATIONS" v="2"/>
</hash>

After this we need to configure the adf-config.xml file and add the oracle.adf.share.config.UserCC Customization Class on the MDS tab. This is a default customization class that is shipped with ADF. You can also write your own, but that is more of a use case for customization than it is for personalization. In our case we’ll just configure the application using the UserCC:

Now you need to configure the components that you want the end user to be able to personalize. This can be done in the View tab. Select the ADF Faces Components Tag Library, because we want to personalize the table and column components, which are default components from ADF Faces. By default all attributes will be persisted; you can uncheck the ones you do not wish to persist:

After the above steps are configured, you can give a fancy demo of your demo application, and everybody is happy. However, in our production application there were still a few issues to tackle, and we struggled with some of these.

Struggle 1: Task flows from libraries combined with a file based MDS on a Windows machine.

Our application was not a simple MVC application; like many applications out there, we used ADF libraries to package task flows in library projects and included them in a bigger main application. When we turned on personalization for components from task flows that come from libraries instead of directly from the application, the changes were not correctly persisted to the file-based MDS on Windows machines.

When running the application via JDeveloper (in our example application by right clicking default.jsf in the ViewController project, and selecting Run), we see that the task flow that comes directly from the ViewController project (the table on the left of the screen), behaves as expected. However, when personalizing the table from the task flow that comes from the imported library (so from the ViewControllerLibrary project), we see the following warning in the log files.

<Apr 5, 2018, 10:44:26,157 AM CEST> <Warning> <oracle.adf.view.rich.change.MDSDocumentChangeManager> <BEA-000000> <Attempt to persist a DocumentChange failed : MDS-02401: The operation ModifyAttribute on the column node is not allowed.    

oracle.mds.exception.MDSRuntimeException: java.net.MalformedURLException: no !/ in spec    

java.lang.NullPointerException: no !/ in spec>

We can verify that the preferences were persisted to the MDS for the left table, but not for the right table, by opening the application (with the same user) in another browser:

Unfortunately, this issue occurs when using task flows from libraries in combination with a file-based MDS on a Windows machine. WebLogic is not able to create a file path that contains !/ (which is used to indicate that the resource is part of a library).

Luckily, in our case all other DTAP environments use a database MDS instead of a file-based MDS, and the database MDS does not have this file path issue. So on all environments except the (local) Integrated WLS, we do not see these warnings and both table personalizations are persisted. It might take you some time to realize this, though, if you are reluctant to deploy to Dev or Test before the personalization works on your local machine.

Struggle 2: Deploying the application as an EAR instead of via JDeveloper

Deploying from JDeveloper as well as creating an EAR file seems to work. However, there was still some struggle here as well. The personalization was working when deploying via JDeveloper, but when we built an EAR from the application via JDeveloper and deployed it manually to the Integrated WLS via the console, none of the settings were persisted to the MDS, and we saw the following warning in the log files:

<Apr 5, 2018, 2:58:12,545 PM CEST> <Warning> <oracle.adf.view.rich.change.DocumentUtils> <BEA-000000> <ADFv: Trouble getting the mutable document from MDS. MDS-01273: The operation on the resource /WEB-INF/view-from-library.jsff.xml failed because source metadata store mapped to the namespace / DEFAULT is read only..>

By opening the application in different browsers again, we can confirm that neither of the table customizations is persisted in the MDS now:

When personalization across sessions with the MDS is configured, JDeveloper always creates a metadata-store-usage in the adf-config file at deployment time. This configuration is named MAR_TargetRepos.

It does not matter whether we have configured a different metadata-store-usage within the adf-config: if MAR_TargetRepos is not present while creating the EAR file, it will be added to the adf-config file. The only solution we found is to name our own metadata-store-usage to match this expected default, so nothing is overridden or added.

We added the following snippet to the adf-config/adf-mds-config/mds-config tag in adf-config.xml:

<persistence-config>
	<metadata-namespaces>
		<namespace path="/persdef" metadata-store-usage="MAR_TargetRepos"/>
	</metadata-namespaces>
	<metadata-store-usages>
		<metadata-store-usage id="MAR_TargetRepos" default-cust-store="true">
			<metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
				<property name="metadata-path" value="${TEMP}"/>
				<property name="partition-name" value="PersDef"/>
			</metadata-store>
		</metadata-store-usage>
	</metadata-store-usages>
</persistence-config>

Note that we did not set deploy-target to true, because if we do this, our custom MAR_TargetRepos will be overridden with the default implementation again, when building the EAR file. This default implementation does not contain the FileMetadataStore implementation:

<metadata-store-usage id="MAR_TargetRepos" deploy-target="true" default-cust-store="true"/>

Because we want to keep our own MAR_TargetRepos, so the personalization also works when deploying the EAR instead of deploying via JDeveloper, we leave deploy-target at its default (false). In this case the FileMetadataStore configuration is preserved in the EAR file.

If we deploy the new EAR file, we see the same behaviour as when deploying it via JDeveloper. Also, when we use the personalization functions on the screen, we will see files being created in the PersDef folder inside the directory pointed to by the %TEMP% environment variable.

This FileMetadataStore configuration is changed (back) to a DBMetaDataStore configuration by an ANT build script before deploying to the different DTAP environments instead of the Integrated WLS environment. An example of such an ANT script can be found below.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="antlib:org.apache.tools.ant">
	<property environment="env"/>
	<property name="ear.location" location="/path/to/your/deployment.ear"/>
	<!-- Classpath -->
	<path id="wlst.classpath">
		<fileset file="${env.WLS_HOME}/modules/features/wlst.wls.classpath.jar"/>
	</path>
	<!-- WLST task definition -->
	<taskdef name="wlst" classname="weblogic.ant.taskdefs.management.WLSTTask" classpathref="wlst.classpath"/>
	<!-- Target to replace persDef repository -->
	<target name="replace-persdef-repo">
		<echo message="Updating MDS config for [${ear.location}]..."/>
		<wlst failonerror="true" debug="false" classpathref="wlst.classpath">
			<arg file="${ear.location}"/>
			
			archive = getMDSArchiveConfig(fromLocation = sys.argv[0])
			archive.setAppSharedMetadataRepository(
				namespace='/persdef',
				repository='mds-owsm',
				partition='PersDef',
				type='DB',
				jndi='jdbc/mds/owsm'
			)
			archive.save()
			
		</wlst>
		<echo message="Updating MDS config done!"/>
	</target>
</project>

Be sure that the WLS_HOME environment variable is set in the system environment variables, and that the ear.location property is replaced in the ANT file, when you want to use the above example.

Struggle 3: Suddenly our application persists changes during the session.

We have configured the adf-config to persist only certain components and attributes across sessions. This works nicely, but suddenly all the other components also persist their state: not across sessions, but during the session.

It is possible that this is not what you want; it certainly was not what we expected or had in mind for our application, but there is not much we can do about it. It would have made more sense to turn this off for all components and only persist those that were explicitly configured to be persisted.

Luckily, the ADF components have persist and dontPersist attributes. Reading the documentation on these attributes, they sound exactly like what we need for our application! Before adding the dontPersist attribute to the hundreds of components in the application, we decided to test it on a couple. What we found was very disappointing: these attributes might as well be there for documentation purposes, because they did nothing concerning persistence.

We decided to create our own custom class to adjust the framework and get this working. To achieve this, the org.apache.myfaces.trinidad.CHANGE_PERSISTENCE context-param can be pointed to a custom class.
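In web.xml this boils down to replacing the value of the existing context-param with our own class (shown further below):

<context-param>
	<param-name>org.apache.myfaces.trinidad.CHANGE_PERSISTENCE</param-name>
	<param-value>nl.whitehorses.personalization.changemanager.CustomChangeManager</param-value>
</context-param>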

At first we tried to extend the oracle.adf.view.rich.change.FilteredPersistenceChangeManager class, which is the class ADF uses by default. However, unfortunately this class is declared final. We decided to create a class that extends the org.apache.myfaces.trinidad.change.SessionChangeManager class, and use the FilteredPersistenceChangeManager as an instance variable. This may not be the prettiest solution, but it serves our purpose:

package nl.whitehorses.personalization.changemanager;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;

import oracle.adf.view.rich.change.FilteredPersistenceChangeManager;

import org.apache.myfaces.trinidad.change.AttributeComponentChange;
import org.apache.myfaces.trinidad.change.ChangeManager;
import org.apache.myfaces.trinidad.change.ComponentChange;
import org.apache.myfaces.trinidad.change.DocumentChange;
import org.apache.myfaces.trinidad.change.SessionChangeManager;

public class CustomChangeManager extends SessionChangeManager {

	private final FilteredPersistenceChangeManager fpcmInstance = new FilteredPersistenceChangeManager();

	@Override
	public void addComponentChange(final FacesContext context, final UIComponent component, final ComponentChange change) {
		if (component == null || component.getAttributes() == null) {
			return;
		}
		// Only components that explicitly carry the persist attribute are considered
		final String[] persistArray = (String[]) component.getAttributes().get("persist");
		if (persistArray == null) {
			return;
		}
		// Delegate to the FilteredPersistenceChangeManager when persist contains ALL
		// or the name of the attribute that was changed
		for (final String persistVal : persistArray) {
			if (persistVal != null && change instanceof AttributeComponentChange && ("ALL".equals(persistVal) || ((AttributeComponentChange) change).getAttributeName().equals(persistVal))) {
				fpcmInstance.addComponentChange(context, component, change);
			}
		}
	}

	@Override
	public void addDocumentChange(final FacesContext context, final UIComponent component, final DocumentChange change) {
		fpcmInstance.addDocumentChange(context, component, change);
	}

	@Override
	public boolean supportsDocumentPersistence(final FacesContext context) {
		return fpcmInstance.supportsDocumentPersistence(context);
	}

	@Override
	public ChangeManager.ChangeOutcome addDocumentChangeWithOutcome(final FacesContext context, final UIComponent component, final DocumentChange change) {
		return fpcmInstance.addDocumentChangeWithOutcome(context, component, change);
	}

}

As you can see, the logic for the persist attribute has been implemented in the addComponentChange method. The addComponentChange method of the FilteredPersistenceChangeManager instance is only called when the component contains the persist attribute set to ALL or to the name of the attribute that changed. Besides the addComponentChange method, all other (public) methods of the FilteredPersistenceChangeManager have been implemented to use the instance variable as well.
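As a hypothetical usage example, a column whose changes should still be persisted would then explicitly carry the persist attribute (ids and attribute names are illustrative):

<!-- Changes to this column are delegated to the FilteredPersistenceChangeManager,
     because persist is set to ALL (a specific attribute name such as "width" would also match) -->
<af:column id="c1" headerText="Name" sortable="true" persist="ALL">
	<af:outputText value="#{row.Name}" id="ot1"/>
</af:column>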

Conclusion

After some struggles and adjustments to the implementation and configuration of the application, we got personalization to work in our real-world application used by customers. We overcame the struggles, but it was not as easy as the blogs on the internet made us believe beforehand. We hope that sharing this experience might save you from some of the troubles we had.

To give you some more insight into the code and the struggles, we have created a (simple) demo application, AdfPersonalization, to reproduce the issues we had, which we also used to solve them.

This application consists of the default Model and ViewController projects. Also, we added a ViewControllerLibrary project. This ViewControllerLibrary project is imported as a library by the ViewController project.

The AdfPersonalization application can be deployed to Weblogic in multiple ways:

  • Using the ‘Run’ button in JDeveloper.
  • Using Application > Deploy > … to IntegratedWebLogicServer in JDeveloper. This can be done to verify that struggle 2 is no longer an issue.
  • Using Application > Deploy > … to EAR, followed by running the replace-persdef-repo ANT target from the build.xml file. Afterwards the EAR can be deployed to a WebLogic server which is capable of using a database-based MDS. This can be done to verify that struggle 1 is no longer an issue.

The source of this project can be downloaded via AdfPersonalization.zip.


Progressive Web Apps – bridging the gap between apps and websites

Apps, websites, mobile sites

For a long time, developers had to choose whether they would build a (mobile) web site or an app to reach their audience. Both have their own set of advantages and disadvantages. Web sites are platform independent, but lack functionality and cannot be used offline. Apps deeply integrate with the device on which they are installed but have to be developed for each platform and are not as easily updated.

Progressive Web Apps

The solution for that is called a Progressive Web App. Basically, it’s a website with added functionality. There are a number of features that a PWA requires.

First, a PWA-enabled website should be served over HTTPS with an SSL certificate, in order to prevent Man in the Middle (MitM) attacks between the app and the backend.

Second, it contains a JSON file called the Web Application Manifest. This file holds information about the name of the application, links to the web app icons or image objects, and details about the splash screen.
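A minimal manifest could look like this (names, colors and icon paths are of course just examples):

{
  "name": "My Progressive Web App",
  "short_name": "MyPWA",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/images/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/images/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}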

Last but not least, a PWA has a feature called service workers. These are little JavaScript programs that enable push notifications, background data loading and caching. But wait, there’s more: they also provide access to important features of your mobile device, like location, the camera and things like Apple Pay. And web features like WebAssembly (for executing near-native code in the web browser) are possible too. This truly bridges the gap between web pages and app features.
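Registering a service worker from a page takes only a few lines of JavaScript; a minimal sketch (the file name sw.js is just an example):

// Register the service worker if the browser supports it
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (registration) {
      console.log('Service worker registered for scope:', registration.scope);
    })
    .catch(function (error) {
      console.error('Service worker registration failed:', error);
    });
}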

Are we there yet?

Although the technology is promising, there are still some hurdles to overcome. Not all features of all mobile devices are supported yet. Touch ID, for example, is not usable without an extra layer like PhoneGap or Cordova. Also, push notifications do not work on iOS devices, because service workers require background processes, which are not supported (yet).

The word “progressive” in PWA stands for the idea that although not all functionality is supported by every device, the features that are supported will work anyway, without breaking the entire app. So if push notifications do not work, the site is still able to cache and refresh data. Basically, it’s the next step after responsive web sites, which enabled developers to build one site for many different screen resolutions and input types.

Whether PWAs are going to be the next big thing is still an open question. But they will pave the way for a smoother user experience across devices and a more generic design of apps anyway. The gap between native apps and websites has become narrower, although it is still not fully bridged.

The MongoDB ObjectId explained in under 2 minutes

As you may have noticed, especially when you come from a MySQL or Oracle background, the unique identifier for database records (or ‘documents’ in MongoDB) is quite different. It is not an incremental counter, but a long string of characters: 12 bytes in hexadecimal format, to be precise. And although these identifiers appear to contain no information, they actually do. This is how an ObjectId is composed.

Let’s take an ObjectId:

507f1f77bcf86cd799439011

The first part is a 4-byte value representing the seconds since the Unix epoch (hence, it’s a timestamp!), the next part is a 3-byte machine identifier that is unique for every machine, then a 2-byte value containing the process id, and finally a 3-byte counter, starting with a random value.

This combination of values (timestamp, machine, process, counter) guarantees a unique value for the ObjectId.
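To make the layout concrete, here is a small, purely illustrative sketch in plain JavaScript that slices the example id above into those four parts:

var id = '507f1f77bcf86cd799439011';

var parts = {
  timestamp: parseInt(id.substring(0, 8), 16),   // seconds since the Unix epoch
  machine:   id.substring(8, 14),                // 3-byte machine identifier (hex)
  process:   id.substring(14, 18),               // 2-byte process id (hex)
  counter:   parseInt(id.substring(18, 24), 16)  // 3-byte counter
};

console.log(new Date(parts.timestamp * 1000));   // creation time of the document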

Because the timestamp is included, it’s possible to extract it from the Id. That’s fairly simple, because Mongo provides a function for that:

ObjectId("507c7f79bcf86cd7994f6c0e").getTimestamp()

returns
ISODate("2012-10-15T21:26:17Z")

Just as simple is extracting the string value:

ObjectId("507c7f79bcf86cd7994f6c0e").valueOf()

returns
507c7f79bcf86cd7994f6c0e

See the MongoDB reference for more examples.

Node.js tutorial part 2; building a proper website

Your first fully functional Node.js application

In the first part of this tutorial we set up a development environment. As you noticed there, the core of a Node.js application is a server-side JavaScript file. You will be editing this file a lot, and along the way it will grow accordingly.

I think the first step you will take is changing your code so that Node.js serves a proper website. This can be an entirely new site, but without much effort an existing one too.

First things first: npm is the tool we used for creating a first application. When installing packages with npm, you have different options. With npm install -g <packagename> you install global (system-wide) packages; without the -g flag, packages are installed into the current directory/project.

While we’re at it, install MongoDB on your system. We’re going to use it as the database for our application. Installation is pretty straightforward on most systems; this site is an excellent starting point.

There are a number of npm packages that are of huge value when creating Node.js applications, and the choice is enormous. The following packages (in no particular order) are especially useful:

  • path
  • express
  • mongoose
  • body-parser

Let’s see how we use this in a simple but functional Node.js application. Create a new directory for your project and run

npm init

The wizard is pretty self-explanatory. Now we install the necessary npm packages. Install them using these commands:

npm install express --save
npm install express-session --save
npm install body-parser --save
npm install mongoose --save

To make things easy for you, here’s a code snippet you can paste into the index.js at the root of your project:

// initiate express middleware
const express = require('express');
const app = express();

// assign the directory 'public' for storing static 
// files (e.g. HTML and CSS)
app.use(express.static('public'));

// enable the parsing of request bodies and 
// simplified handling of JSON content
const bodyParser = require('body-parser');
app.use(bodyParser.urlencoded({extended: true}));

// initiate Mongoose for MongoDB connectivity
const mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/rodeo', {
   keepAlive: true,
   reconnectTries: 10
});

// define a schema for a mongoDB collection
const Schema = mongoose.Schema;

var transactieSchema = new Schema({
   userName: {type: String},
   transactieDatum: {type: Date},
   code: {type: String},
   aantal: {type: Number},
   bedrag: {type: Number}
   //  _id: false
},
   {collection: 'transacties'});
mongoose.model('transacties', transactieSchema);

// Do some neat forwarding of the root of your site 
// to a default file
app.get('/', (req, res) => res.sendFile('index.html', { root: 'public' }));

// a function for handling a post request
const postTransacties = function (req, res, next) {
   console.log(req.body);
   var transacties = mongoose.model('transacties');
   var transactie = new transacties(req.body);
   transactie.save(function (err) {
      if (err) {
         console.log(err);
         return res.status(500).json({error: 'could not save transactie'});
      }
      return res.json({value: 'hello'});
   });

};

// a function for handling a get request
const getTransacties = function (req, res, next) {
   var transacties = mongoose.model('transacties');
   transacties.find({}, function (err, data) {
      return res.json(data);
   });

};

// routing for post and get on the /transacties url
app.route('/transacties')
   .post(postTransacties)
   .get(getTransacties);

// finally, let's fire up a webserver at port 3500
app.listen(3500, function () {
   console.log('listening on *:3500');
   module.exports = app;
});

This is not the best practice in terms of maintainability (it’s better to keep your database, router and controller-middleware in different files), but it will demonstrate everything you need for a full-fledged Node.js application including posting and retrieving data.
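As a sketch of such a split (file and variable names are just suggestions), the /transacties handlers could live in their own module and be mounted from index.js:

// routes/transacties.js - the /transacties handlers in their own module
const express = require('express');
const mongoose = require('mongoose');

const router = express.Router();

// POST /transacties - save a new transactie (the model must be registered before this runs)
router.post('/', function (req, res) {
   const Transacties = mongoose.model('transacties');
   new Transacties(req.body).save(function (err) {
      if (err) {
         return res.status(500).json({error: 'could not save transactie'});
      }
      return res.json({value: 'hello'});
   });
});

// GET /transacties - return all saved transacties
router.get('/', function (req, res) {
   mongoose.model('transacties').find({}, function (err, data) {
      return res.json(data);
   });
});

module.exports = router;

// in index.js, the app.route('/transacties') block can then be replaced with:
// app.use('/transacties', require('./routes/transacties'));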

The final thing you need is a new directory called “public”. Create an index.html there with the following contents:

<html>
<head>
	<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
	<script>
		$(document).ready(function () {
			$('body').on('click', '.addone', function () {
				var res = {
					userName: $('.naam').val(),
					transactieDatum: new Date(),
					code: $('.code').val(),
					aantal: $('.aantal').val(),
					bedrag: $('.bedrag').val()
				};
				console.log(res);
				$.ajax({
					type: "POST",
					url: "/transacties",
					data: res,
					success: function (result) {
						console.log('succes');
					},
					error: function (jqXHR, textStatus) {
						console.log(textStatus);
					}
				});
			});
		});
	</script>
</head>
<body>
	<div>
		<label for="naam" class="col-sm-3 col-form-label">Naam</label>
		<input id="naam" name="naam" class="form-control naam">
	</div>
	<div>
		<label for="code" class="col-sm-3 col-form-label">Code</label>
		<input id="code" name="code" class="form-control code">
	</div>
	<div>
		<label for="aantal" class="col-sm-3 col-form-label">Aantal</label>
		<input id="aantal" type="number" name="aantal" class="form-control aantal">
	</div>
	<div>
		<label for="bedrag" class="col-sm-3 col-form-label">Bedrag</label>
		<input id="bedrag" type="number" name="bedrag" step="0.01" class="form-control bedrag">
	</div>
	<button type="button" class="addone">Save</button>
</body>
</html>

Now, go to the root of your project and type

node index.js

And when you point your browser to http://localhost:3500 a simple input form will be presented.

If you want to check whether everything works as expected, fill in the form, press Save and go to http://localhost:3500/transacties. A JSON document will be displayed with all the records you saved.

That’s all. Now you have a fully working skeleton from where you can start building your enterprise website in Node.js!