
Semantic or structured logging is a way to create strongly typed log messages that make logs easier to parse and analyze. In this post I explore how to implement semantic logging in Mendix.

I was introduced to the concept of semantic logging early in my career as a .Net developer and I have used it extensively over the years to create logs that are easy to search, analyze and understand. I have even built a few beautiful and insightful dashboards on top of semantic logs, both for my personal projects and for clients. In Mendix, logging is still done using traditional text-based logging. There are some tools to help load these logs into semantic log stores, but doing so loses most of the benefits of using a semantic log store in the first place. This led me to the question: can I implement semantic logging in Mendix so that I can enhance the value of the logs captured to a semantic store? As this post will show, the answer is yes!

The tools

There is a wide variety of tools that can be used to store and process semantic logs. A well-known enterprise example is Splunk; I've also made use of the ELK Stack, but my personal favorite has to be SEQ, which I will be showing off in this post. There are many ways to send log messages to SEQ, but in .Net I always made use of Serilog, which provides a large number of sinks (logging endpoints) and integrates easily with .Net. Mendix, however, runs on top of Java, so this is not an option. Looking through the SEQ documentation I found that there is a Java port of Serilog called serilogj. Serilogj is not as well rounded as its .Net equivalent, but it contains everything I need to build out a PoC.


Because I've worked on several Mendix projects that ran multiple instances, I decided that any solution I build would have to work in both single- and multi-instance environments. I also wanted to be able to turn semantic logging on and off while having a fallback to regular Mendix logging. With this design, the custom logging module can be used even in projects that don't leverage semantic logging, while making it easy to switch semantic logging on without having to rebuild all the logging in the project.

I also decided to expose several microflows that mirror the serilogj way of logging to make it easy for users to consume. Serilogj is a Java library, so custom Java actions are required to handle the logging. Serilogj provides a static Log class, which makes it straightforward to use in Mendix without having to worry about how to keep the instance of our logger. Logging with serilogj requires setting up the logger first and then calling the logger to log the message.


Logging configuration

The first component to build is a Java action that will be called during the after startup microflow to configure the logger.
To get started I created the ASU_SetupSeq Java action, which takes two parameters: one to set the SEQ endpoint and another that defines the minimum level to log at.


The implementation for the action is as follows.


The Java logic first retrieves the Mendix server instance Id using Core.getXASId(). It then configures the serilogj logger by calling setLogger, setting the new logger to write to SEQ for the provided endpoint with the minimum log level defined. Next we set an enrichment property using the with statement and set it to the instance Id we initially retrieved; this is useful for knowing which instance in the cluster submitted a log entry. Finally we invoke the createLogger method to create the logger. With this, our logger is configured to log to SEQ, with each log message having the "Instance" property set to the Id of the instance on which the logger was created.
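Given that description, the action body might look roughly like this. This is a sketch, not the definitive implementation: the SeqSink and PropertyEnricher class names are assumptions based on serilogj's Serilog lineage, and toLogEventLevel is the enum-conversion helper described later in this post.

```java
// Sketch of the ASU_SetupSeq executeAction body (runs inside the Mendix runtime).
// SeqSink and PropertyEnricher are assumed class names from serilogj.
String instanceId = Core.getXASId();

Log.setLogger(new LoggerConfiguration()
    .writeTo(new SeqSink(SeqEndpoint))                  // assumed sink class
    .setMinimumLevel(toLogEventLevel(MinimumLogLevel))  // converter described later
    .with(new PropertyEnricher("Instance", instanceId)) // assumed enricher class
    .createLogger());

return true;
```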

To round out the logic for setting up the logger I created a new microflow called ASU_ConfigureSeq. This microflow should be hooked into the project's configured after startup flow.


The microflow creates the two parameters that are required by the Java action and then proceeds to call the Java action with those values. For the PoC the configuration is simply created inline, but in a production application these values can be pulled either from constants or from the database, allowing for easy configuration per environment.


Java actions

To write logs to SEQ we simply have to invoke the appropriate method on the static Log class. Serilogj provides several methods to create logs of different levels, but to keep the code as compact as possible I opted to use the write method and pass the rest of the values as parameters.
Our first task is to set up an enum that will be used in the modeler when creating logs. This enum contains the same values as serilogj's log level enumeration.


Next we create the following domain model entities to allow for easy logging of events.


The first entity is our log entry. It contains attributes for setting the log level as well as the template that we will be using for our log message. Templates in structured logging are in the following form.
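As an illustration (the surrounding text is hypothetical, but the {Parameter} placeholder matches the one referenced below), a message template looks like this:

```
Received input value {Parameter} from the client
```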


The log system will keep both the message template and a string representation in which it replaces the placeholders, {Parameter} in this case, with the parameter values that were defined. As such, our model also contains a parameter entity that holds the string value of each parameter. Structured log systems can support a variety of parameter types, but for simplicity's sake, and because serilogj doesn't support a large variety of types, we simply keep all our parameters as strings. Finally, there is an association to the Mendix Error entity, which we will use to log an exception in the case of a Warning, Error or Fatal level log. (In truth any log level can have an exception associated with it, but these are the levels that usually have one.)

Finally we can define our Java action to actually write our log messages. We pass in several parameters to cater for all of the different information that is needed to log the message.


All the parameters should be self-explanatory. Worth noting is that I chose to pass the Parameter entity list and the ErrorObject separately from the LogEntry entity. I find this to be more explicit as well as easier to deal with in the Java action.

To be able to get the correct log level value we need a method to convert the enumeration we get from Mendix into the one expected by serilogj. To do this I created a separate Java class and defined the following static method. This way we can easily reuse the conversion anywhere in our Java actions.


The method simply contains a switch statement that returns the appropriate LogEventLevel enumeration value based on the received LogLevel enumeration.
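As a self-contained sketch of that switch: both enums below are local stand-ins (in the real module, one is the Mendix-generated enumeration proxy and the other is serilogj's LogEventLevel), but the mapping logic is the same.

```java
// Self-contained sketch: MxSeqLogLevel stands in for the Mendix-generated
// enumeration proxy and LogEventLevel for serilogj.events.LogEventLevel.
public class LogLevelConverter {
    public enum MxSeqLogLevel { Verbose, Debug, Information, Warning, Error, Fatal }
    public enum LogEventLevel { Verbose, Debug, Information, Warning, Error, Fatal }

    // Maps the modeler-side enumeration value to the serilogj log level.
    public static LogEventLevel toLogEventLevel(MxSeqLogLevel level) {
        switch (level) {
            case Verbose:     return LogEventLevel.Verbose;
            case Debug:       return LogEventLevel.Debug;
            case Information: return LogEventLevel.Information;
            case Warning:     return LogEventLevel.Warning;
            case Error:       return LogEventLevel.Error;
            default:          return LogEventLevel.Fatal;
        }
    }
}
```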

Now we can finally implement the logic to write the log event to our SEQ instance.


The method starts by creating an array of Object that is the same size as our Parameters list. It then loops over the parameters and gets the value from each one, writing it into the appropriate position in the array. Next we create a new ILogger instance from our static Log using the forContext method; we use this to define another enricher that specifies which Node wrote the log value. This Node is the same as the Node value used by regular Mendix logs. Finally we check whether the ErrorObject is null to decide if we want to write the log with or without a Throwable. The write method takes the log level (which we convert using our static method), a Throwable, the message template and finally the list of parameters; it can also be called without the Throwable if one isn't present. In the call where our ErrorObject is not null, we create a new Throwable, as the error object isn't actually one, and construct it with the ErrorObject's message and stack trace to replicate a proper Throwable.
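A sketch of that body, under the assumptions that the generated proxy accessors are named getValue, getMessage and getStacktrace, and that toLogEventLevel is the static converter from the previous section:

```java
// Sketch of the LogEntry_Write executeAction body; proxy accessor names
// (getValue, getMessage, getStacktrace) are assumptions.
Object[] values = new Object[ParameterList.size()];
for (int i = 0; i < ParameterList.size(); i++) {
    values[i] = ParameterList.get(i).getValue();
}

// Enrich with the node name, matching the Node value of regular Mendix logs
ILogger logger = Log.forContext("Node", Node);

if (ErrorObject == null) {
    logger.write(toLogEventLevel(LogLevel), MessageTemplate, values);
} else {
    // The Mendix Error entity is not a Throwable, so build one
    // from its message and stack trace
    Throwable cause = new Throwable(
        ErrorObject.getMessage() + "\n" + ErrorObject.getStacktrace());
    logger.write(toLogEventLevel(LogLevel), cause, MessageTemplate, values);
}
return true;
```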

We also need a Java action to enable us to log messages to the Mendix logs. To set this up I created a Java action called LogEntry_WriteMxLog with the parameters below.


The parameters are almost the same as what we pass to the LogEntry_Write Java action except for the message which is now just a plain string.

The Java action is implemented like this.


We create a new Mendix ILogNode for the node parameter that has been passed to us. Once we have a log node we simply invoke the log method with the appropriate parameters. The logic is an almost exact match to the LogEntry_Write logic, just leveraging the Mendix logger instead of serilogj.

With this our Java actions to log messages are complete and we are ready to implement the microflows that wrap them.

Private logging methods

For maximum flexibility we provide two microflows to handle the logging by wrapping our Java actions. The first microflow looks like this.


The microflow simply retrieves our Parameter and Error objects via association from the LogEntry entity that is passed as a parameter. Then it invokes the LogEntry_Write Java action with the appropriate parameters. I had planned to add logic to check that the parameter list size matches the number of parameters in the provided message template, but in the end I decided that this isn't a complexity needed for a basic PoC.

The second microflow looks like this.


The microflow first checks a variable to see if it should write a log entry (again, this should come from a constant or the database). If a Mendix log message needs to be written, the message template is converted into a message by splicing in the parameters, the ErrorObject is retrieved over the LogEntry's association, and finally the LogEntry_WriteMxLog Java action is invoked to write the message to the Mendix logs.


The sub-microflow that replaces the template placeholders with the appropriate parameters takes the message template, finds the location of the first placeholder by looking up the positions of the first opening and closing braces, uses these values to grab the message template before and after the placeholder, and splices the parameter value in between them.
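The same splicing logic can be sketched in plain Java (self-contained; in the PoC it lives in microflow expressions using find and substring instead):

```java
import java.util.List;

// Self-contained sketch of the placeholder-splicing sub-microflow:
// each {Placeholder} is replaced, in order, by the next parameter value.
public class TemplateRenderer {
    public static String render(String template, List<String> parameters) {
        String result = template;
        for (String value : parameters) {
            int open = result.indexOf('{');
            int close = result.indexOf('}', open);
            if (open < 0 || close < 0) {
                break; // more parameters than placeholders; leave the rest as-is
            }
            // Grab the text before and after the placeholder and splice
            // the parameter value in between
            result = result.substring(0, open) + value + result.substring(close + 1);
        }
        return result;
    }
}
```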

The final microflow looks like this.


This microflow takes all of our parameters (except Error and LogLevel) as string parameters and then creates the appropriate LogEntry for logging. To do this we use the SplitString action from the Community Commons library to split our parameter string into separate strings and create Parameter objects for each. We also create the LogEntry entity and set the appropriate associations. Finally we invoke the LogEntry_Write sub-microflow to handle the actual logging. We also invoke the method to write a Mendix log message.

Logging Microflows

Finally it is time to create the sub-microflows that developers will use to create the actual log messages. As mentioned before, these methods reflect the methods available in Serilog for logging messages.


We provide a sub-microflow to write a log entry for each logging level. These microflows take parameters similar to the SUB_WriteLog microflow and then simply invoke that flow. In this way, developers implementing a log call simply have to call the microflow of the appropriate log level without having to pass the log level value as a separate parameter. To change the log level they can simply switch to the appropriate flow without having to change anything else. Of course, it's still possible to call the lower-level SUB_WriteLog method directly if desired.

Intercepting Mendix logs

Log Subscriber

With the logic above we are now capable of creating all of our application logs as neat, semantic logs. This leaves us a little short, though: what about the logs created by Mendix and store modules? Not to fear, there is a solution. First, we have to create a new Java class that extends the Mendix LogSubscriber class. The logic is as follows.


The class simply extends the LogSubscriber provided by Mendix. We set up our own logger from the static Log with an additional enrichment property called "Logger", set to "Mendix". In this way we can easily filter out values logged by Mendix in SEQ. Next we override the processMessage method. To implement this method we start by getting the appropriate log level for the Mendix log level. This is resolved using another static Java method, similar to the converter described above (left as an exercise for the reader), that maps the Mendix log level to a serilogj log level. Next we create a new logger from our subscriberLogger variable and set the "Node" enrichment property to the node from the LogMessage. To get the message we have to check whether the message in the LogMessage is null, as Mendix can provide null log messages in some instances (specifically during shutdown); if we receive a null message we simply set it to an empty string, otherwise we get the appropriate string from the message attribute of the LogMessage. Finally we check whether the cause (a Throwable) is null and then invoke the appropriate write method on our logger. With this, we have a subscriber we can pass to Mendix in order to intercept all Mendix log messages and write them to SEQ.
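The subscriber might look roughly like this. It is a sketch: the LogMessage field names, the SeqLogSubscriber class name and the toLogEventLevel(LogLevel) helper for the Mendix log level are assumptions.

```java
// Sketch of the log subscriber; field names on LogMessage and the
// toLogEventLevel(LogLevel) helper are assumptions.
public class SeqLogSubscriber extends LogSubscriber {
    // Fixed "Logger" property makes Mendix-originated entries easy to filter in SEQ
    private final ILogger subscriberLogger = Log.forContext("Logger", "Mendix");

    public SeqLogSubscriber(String name, LogLevel logLevel) {
        super(name, logLevel);
    }

    @Override
    public void processMessage(LogMessage msg) {
        // Enrich with the node that wrote the message
        ILogger logger = subscriberLogger.forContext("Node", msg.node.name());

        // Mendix can pass a null message (notably during shutdown)
        String message = msg.message == null ? "" : msg.message.toString();

        if (msg.cause == null) {
            logger.write(toLogEventLevel(msg.level), message);
        } else {
            logger.write(toLogEventLevel(msg.level), msg.cause, message);
        }
    }
}
```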

Hooking the Mendix logs

To actually hook the Mendix logs we need another Java action that we can invoke during our after startup microflow. Create a Java action as follows.


The only parameter we pass to the Java action is the level at which we want to intercept the Mendix logs. For the parameter we use the enumeration for SEQ log levels we created earlier.

The Java action looks as follows.


We simply invoke the registerLogSubscriber method and pass it a new instance of our Java class that extends the LogSubscriber class. To construct this class we provide it with a name as well as the minimum log level we want it to intercept messages for.
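The action body is short. In this sketch, SeqLogSubscriber is the subscriber class from the previous section, the subscriber name is arbitrary, and toMendixLogLevel is an assumed helper that maps our SEQ log level enumeration to Mendix's LogLevel:

```java
// Sketch of the hook action body; the subscriber name and the
// toMendixLogLevel helper are assumptions.
Core.registerLogSubscriber(
    new SeqLogSubscriber("SeqLogSubscriber", toMendixLogLevel(MinimumLogLevel)));
return true;
```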

With the Java action in place, all that remains is to set up the after startup microflow to invoke it. Create a microflow called ASU_HookMendixLogging.


The microflow simply sets up (for the PoC) the minimum log level and then invokes the Java action to set up the log intercept. You might ask why I am not combining the two after startup microflows. The reason is simple: keeping them separate gives the implementer the ability to capture only their own logs to SEQ, or to capture both our logs and the Mendix logs.

There are three limitations to capturing Mendix logs to SEQ that need to be mentioned. The first is that the logs captured from Mendix are simply non-structured data from the perspective of SEQ (besides the enrichments we add); this limitation can potentially be bypassed by providing mappings to create proper semantic logs from the entries, but that is not a trivial task. The second is that our logger only starts capturing logs once its Java action has been invoked in the after startup microflow, so our after startup flows should run first to ensure we capture details from all other startup flows. The third and final limitation is that Mendix stops sending log entries to log subscribers early during node shutdown, so we can potentially lose some log information while a node shuts down.

Sadly, these limitations cannot be overcome without Mendix committing to providing proper semantic logs and allowing loggers to be hooked up very early in the runtime startup process and dropped only very late during shutdown.

Project structure

For the PoC this is the final project structure. There are several additional logging flows, but most of the rest of the structure has been shown.


I added the basic logging microflows to a private folder as these are intended for internal use; however, as Mendix does not provide an actual private modifier, they remain open for developers to use.

Logs in SEQ

Below is an image captured from SEQ. It includes our own semantic logs as well as logs captured via our log subscriber. As can be seen, our own logs include parameters (like Microflow or ParamCount) that are semantic log parameters.


In contrast, logs that were intercepted from the Mendix runtime only include the parameters that we appended to the log, the rest is unstructured data.


Note how 'debugger/' isn't an actual parameter.


Capturing semantic logs from Mendix turns out to be a fairly trivial exercise; setting up all of the logic for the PoC took me around 4-6 hours. Even if a project doesn't use the custom logging microflows shown here, it is possible to capture most of the Mendix logs directly to your desired logging application. Sadly, serilogj (unlike Serilog) only provides a handful of sinks, which means additional sinks (like writing to Splunk) will require additional work to set up and integrate smoothly. Hopefully Mendix will support full semantic logging in a future version, but until then most of the benefits of semantic logging can be realized with the solution presented here.