
Boomi Buzz


If you’ve ever had to support a production application, you know how important logs can be.  Like most applications, the Dell Boomi Atom generates log messages that are written to files on disk. When problems arise, these logs help you figure out what is going on.  Sometimes troubleshooting is as easy as opening the log file and reviewing the most recent messages.  However, sometimes it’s not. Sometimes you need to correlate the Dell Boomi Atom logs with logs from other applications (possibly on different servers) or observe patterns in logs over time.  Attempting to do this manually can be a daunting task. This is where a centralized logging solution can really help. By centralizing all logging, including the Dell Boomi Atom log, into a single application, you gain the ability to easily search across multiple logs quickly.


I recently spent some time experimenting with a popular open source logging platform called Elastic Stack.  In this blog post we will step through what I did to install the Elastic Stack and how I configured it to ingest container logs.  By the end of this blog, we will have the following setup:

Dell Boomi Atom + Elastic Stack


In order to follow along with this blog, there are a few things you will need to set up first:

  • Docker: To simplify my setup, I chose to install all the Elastic Stack components with Docker. If you don't have Docker installed already, you can find information on how to install it at
  • Docker Compose: Docker Compose allows you to coordinate the installation of multiple Docker containers. For my setup, I wanted to install Elasticsearch, Kibana and Logstash all on the same server; Docker Compose made this easy. You can install Docker Compose by following the steps outlined at
  • Atom: I will be demonstrating sending container logs from a single Atom that is set up to use UTC and the en_US locale (the importance of this will be explained later). I recommend using a fresh Atom so that you don't impact anything else. Instructions for installing a local Atom can be found under the Atom setup topic in our documentation. While I am only demoing an Atom, the ideas I cover in this blog can be applied to Molecules and Clouds as well.
  • Configuration Files: All the configuration files referenced in this blog (filebeat.yml, docker-compose.yml, etc.) are available on Bitbucket.


What is the Elastic Stack?

Elastic Stack (formerly known as ELK) "is the most popular open source logging platform."  It is made up of four main components that work together to ship, transform, index and visualize logs.  The four components are: 




  • Beats: A lightweight agent that ships data from the server where it is running to Logstash or Elasticsearch.
  • Logstash: A data processing pipeline that ingests data from Beats (and other sources), transforms it, and sends it along to Elasticsearch.
  • Elasticsearch: A distributed, RESTful search and analytics engine that centrally stores all your data.
  • Kibana: A data visualization tool that allows you to slice and dice your Elasticsearch data (i.e. your log data).


All four components were designed with scale in mind. This means you can start out small, like we will in this blog, and scale them out later to meet the demands of your architecture.


What is the Container Log?

Now that we’ve gone over what the Elastic Stack is, let’s take a look at the log that we are going to process. If you aren’t already familiar with the container log, I encourage you to read Understanding the Container Log.  For our purpose, the most important thing to understand is the format of the log. This information will be needed when we configure the Filebeat prospector and the Logstash pipeline in the next sections. As the “Log Structure and Columns” section explains, each log message is composed of five fields: TIMESTAMP, LOG LEVEL, CLASS, METHOD and MESSAGE.


The best way to understand the structure is to look at some example log messages. Here are two from my container log: 

May 31, 2018 2:15:22 AM UTC INFO [com.boomi.container.core.AccountManager updateStatus] Account manager status is now STARTED


May 31, 2018 2:25:08 AM UTC SEVERE [com.boomi.process.ProcessExecution handleProcessFailure] Unexpected error executing process: java.lang.RuntimeException: There was an error parsing the properties of the Decision task. Please review the task ensuring the proper fields are filled out.
java.lang.RuntimeException: There was an error parsing the properties of the Decision task. Please review the task ensuring the proper fields are filled out.
        at com.boomi.process.util.PropertyExtractor.resolveProfileParams(
        at com.boomi.process.util.PropertyExtractor.initGetParams(
        at com.boomi.process.shape.DecisionShape.execute(
        at com.boomi.process.graph.ProcessShape.executeShape(
        at com.boomi.process.graph.ProcessGraph.executeShape(
        at com.boomi.process.graph.ProcessGraph.executeNextShapes(
        at com.boomi.process.graph.ProcessGraph.execute(
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$


The first log message is a simple single-line log message. The second log message is an example of a multi-line message that includes a stack trace in the MESSAGE field. Another thing that might not be obvious just by looking at the log is that the TIMESTAMP and LOG LEVEL fields are dependent on your time zone and locale. This means that if you have multiple Dell Boomi Atoms running in different locations, you might need to have different Logstash and Filebeat configurations (or at least more complicated grok patterns than I show later). As mentioned earlier, my Atom was configured to log using UTC and en_US.
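To make the five-field structure concrete, here is a rough Python sketch (outside of the Elastic Stack) that splits a single-line container log message into its fields with a regular expression; it assumes an Atom logging in UTC with the en_US locale, as mine does:

```python
import re
from datetime import datetime, timezone

# Rough Python equivalent of the parsing we will later do with grok.
# Assumes the Atom logs in UTC with the en_US locale.
LOG_LINE = re.compile(
    r"(?P<timestamp>[A-Z][a-z]{2} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M) "
    r"(?P<zone>\w+) "
    r"(?P<level>FINEST|FINER|FINE|INFO|WARNING|SEVERE)\s+"
    r"\[(?P<java_class>[\w.]+) (?P<method>\w+)\] "
    r"(?P<message>.*)"
)

def parse_container_line(line):
    """Split one container log line into its five fields, or return None
    for continuation lines (e.g. stack trace frames)."""
    m = LOG_LINE.match(line)
    if not m:
        return None
    fields = m.groupdict()
    fields["parsed_time"] = datetime.strptime(
        fields["timestamp"], "%b %d, %Y %I:%M:%S %p"
    ).replace(tzinfo=timezone.utc)
    return fields

sample = ("May 31, 2018 2:15:22 AM UTC INFO "
          "[com.boomi.container.core.AccountManager updateStatus] "
          "Account manager status is now STARTED")
print(parse_container_line(sample)["level"])  # INFO
```

Note that the continuation lines of a stack trace do not match the pattern at all, which is exactly the property Filebeat's multiline handling relies on later.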


Setting up Elasticsearch, Kibana and Logstash via Docker

Before we can start sending logs via Filebeat, we need to have somewhere to send them to. This means installing Elasticsearch, Kibana and Logstash.  Since I was just exploring Elastic Stack, I decided to install all three products on the same physical server (but separate from the server where my Atom was running) using Docker Compose. Running them all on a single server wouldn't be a good idea in a production environment, but it allowed me to get up and running fast.

  1. Clone the elastic-stack-demo repository and checkout the 'part1' tag.

    $ git clone

    Cloning into 'elastic-stack-demo'...

    $ cd elastic-stack-demo

    $ git checkout part1

  2. Start up the Elastic Stack using docker-compose.

    $ docker-compose up

    Creating elasticsearch ... done
    Creating kibana ... done
    Creating logstash ... done
    Attaching to elasticsearch, kibana, logstash
    elasticsearch | [2018-07-03T04:15:14,294][INFO ][o.e.n.Node ] [] initializing ...

  3. That's it. Once it all starts up, point your browser at http://<your_server>:5601 and see that it brings up Kibana. 

At this point, you have a Logstash pipeline running that is ready to consume and process container log messages.  To understand the pipeline a bit more, let’s take a look at the pipeline configuration file.

input {
  beats {
    port => 5044
  }
}

filter {
  if [sourcetype] == "container" {
    grok {
      match => { "message" => "(?<log_timestamp>%{MONTH} %{MONTHDAY}, %{YEAR} %{TIME} (?:AM|PM)) %{WORD} (?<log_level>(FINEST|FINER|FINE|INFO|WARNING|SEVERE))%{SPACE}\[%{JAVACLASS:class} %{WORD:method}\] %{GREEDYDATA:log_message}" }
    }
    date {
      match => [ "log_timestamp", "MMM dd, yyyy KK:mm:ss a" ]
      timezone => "UTC"
      remove_field => [ "log_timestamp" ]
    }
  }
}

output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
  }
}
The input and output stages of the pipeline are pretty standard.  The pipeline is configured to receive log messages from the Elastic Beats framework and ultimately send them to Elasticsearch.  The interesting part of the pipeline is the filter stage. The filter stage is using the grok filter to parse container log messages into fields so that the information can be easily queried from Kibana.  It is also using the date filter to parse the timestamp from the log message and use it as the logstash timestamp for the event. This way the timestamp in Kibana will be the timestamp of the log message, not the timestamp of when the message was processed by the pipeline.
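As a hedged illustration of what the date filter is doing: its Joda-style pattern "MMM dd, yyyy KK:mm:ss a" corresponds roughly to the strptime format "%b %d, %Y %I:%M:%S %p" for the hour values this log produces, and the parsed value becomes the event's @timestamp:

```python
from datetime import datetime, timezone

# Illustration of the date filter's job: parse the timestamp captured by
# grok and use it as the event time rather than the ingest time.
log_timestamp = "May 31, 2018 2:25:08 AM"
event_time = datetime.strptime(
    log_timestamp, "%b %d, %Y %I:%M:%S %p"
).replace(tzinfo=timezone.utc)

# This value becomes the event's @timestamp instead of the ingest time.
print(event_time.isoformat())  # 2018-05-31T02:25:08+00:00
```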


As a reminder, log message content is dependent on the Atom time zone and locale so the grok and date filters shown in this configuration might need to be tweaked for your time zone and locale.


Setting up Filebeat

Now that the Logstash pipeline is up and running, we can set up Filebeat to send log messages to it.  Filebeat should be installed on the same server as your Atom. There are multiple ways to install Filebeat. I chose to install it using Docker.  My install steps below reference a few variables that you will need to replace with your information:


YOUR_ES_HOST - The hostname or IP address where you just installed Elasticsearch.

YOUR_ATOM_HOME - The directory where your Atom is installed.

YOUR_CONTAINER_NAME - The name you gave your Atom.  This will be queryable in Kibana. 

YOUR_CONTAINER_ID - The unique ID of your Atom (aka Atom ID).

YOUR_LOGSTASH_HOST - The hostname or IP address where you just installed Logstash (in this example, it is the same as YOUR_ES_HOST).


Once you've collected that information, you can install and configure Filebeat on the Atom server by following these steps:

  1. Start your Atom if it isn't running already.

    <YOUR_ATOM_HOME>/bin/atom start

  2. Pull down the Filebeat Docker image.

    $ docker pull

  3. Manually load the Filebeat index template into Elasticsearch (as per the Filebeat documentation).

    $ docker run setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<YOUR_ES_HOST>:9200"]'

  4. Clone the elastic-stack-demo repository and checkout the 'part1' tag.

    $ git clone

    Cloning into 'elastic-stack-demo'...

    $ cd elastic-stack-demo

    $ git checkout part1

  5. Update the ownership and permissions of the Filebeat configuration file (see Config file ownership and permissions for more information on why this is needed).
    $ chmod g-w filebeat.yml 
    $ sudo chown root filebeat.yml
  6. Start the Filebeat Docker container.

    $ docker run -d -v "$(pwd)"/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro -v <YOUR_ATOM_HOME>/logs:/var/log/boomi:ro -e CONTAINER_NAME='<YOUR_CONTAINER_NAME>' -e CONTAINER_ID='<YOUR_CONTAINER_ID>'  -e LOGSTASH_HOSTS='<YOUR_LOGSTASH_HOST>:5044' --name filebeat

Once Filebeat starts up, it will use the prospector defined in filebeat.yml to locate, process and ship container log messages to the Logstash pipeline we set up earlier.  Let's quickly review how the prospector is configured.

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/boomi/*.container.log
  multiline.pattern: '^[A-Za-z]{3}\s[0-9]{1,2},\s[0-9]{4}'
  multiline.negate: true
  multiline.match: after
  fields:
    sourcetype: container
    atomname: '${CONTAINER_NAME}'
    containerid: '${CONTAINER_ID}'
  fields_under_root: true

As you can see, the prospector is configured to:

  • Read all container log files that are present in the /var/log/boomi directory (which is mounted in the Filebeat container that points to your logs directory)
  • Handle multi-line messages so that log messages with stack traces are parsed correctly.  Note, you may need to adjust the multi-line pattern shown here if your Atom is using a different locale.
  • Add additional informational fields (sourcetype, atomname and containerid) to the output that is sent to Logstash.  These fields will end up as queryable fields in Kibana.   
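The multiline handling can be sketched in a few lines of Python: a line matching the date pattern starts a new event, and everything else (stack trace frames, wrapped text) is appended to the previous one. This is an illustration of the behavior, not Filebeat's actual implementation:

```python
import re

# Same start-of-message test as the prospector's multiline.pattern: a line
# beginning with a date like "May 31, 2018" starts a new event; anything
# else belongs to the previous event.
NEW_EVENT = re.compile(r"^[A-Za-z]{3}\s[0-9]{1,2},\s[0-9]{4}")

def group_multiline(lines):
    """Group raw log lines into complete (possibly multi-line) events."""
    events = []
    for line in lines:
        if NEW_EVENT.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

lines = [
    "May 31, 2018 2:25:08 AM UTC SEVERE [com.boomi.process.ProcessExecution handleProcessFailure] Unexpected error",
    "java.lang.RuntimeException: There was an error parsing the properties",
    "        at com.boomi.process.shape.DecisionShape.execute(",
    "May 31, 2018 2:26:00 AM UTC INFO [com.boomi.container.core.AccountManager updateStatus] Account manager status is now STARTED",
]
events = group_multiline(lines)
print(len(events))  # 2
```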


Query the Container Logs

The last step you need to do before you can explore your log messages is to tell Kibana which index(es) you want to search. This is done by creating an index pattern in Kibana.

  1. Open up Kibana (http://<your_server>:5601).
  2. Click on Management.
  3. Click on Index Patterns.
  4. Enter ‘logstash-*’ as the Index pattern and click Next step.
  5. Select '@timestamp' as the Time Filter field name and click Create index pattern.
  6. Once created, you can explore the fields that are available in the new index.


It is finally time to test our setup end to end. Let's generate some log messages and search for them in Kibana.

  1. Stop your Atom. This will generate some log messages.

  2. On the Discover tab in Kibana, run a search for "Atom is stopping."
  3. Click on View single document to see all the fields that the Elastic Stack is tracking.


Isn't that much easier than searching the container log directly?  Searching for keywords is just the tip of the iceberg; I encourage you to explore the Kibana User Guide to see all the ways that Kibana can help you search, view and interact with the container log.


Are you using Elastic Stack (or another product) to centralize your Atom logs? If so, I'd love to hear about it.


Jeff Plater is a Software Architect at Dell Boomi.  He enjoys everything search, but not before his morning coffee.

I wanted to share a solution I recently developed to work with SWIFT messages in the Dell Boomi AtomSphere Platform. If you're not familiar, SWIFT messages are commonly used in the financial industry to exchange data between institutions. SWIFT messages follow a specific data format that does not align with any of the Profile types within AtomSphere, so I developed Swift2XML, a Java library that converts SWIFT messages to/from XML so you can work with the data in Maps and other shapes. The library is generic and supports all SWIFT Message Types.



Getting Started


Get and Upload Libraries to AtomSphere

Please download the following libraries:


Then upload them to your AtomSphere account via Setup > Development Resources > Account Libraries.


Create Custom Library Component

Once uploaded, you can create a Custom Library component and include the three libraries added above:



Once created, don't forget to deploy the Custom Library to the Molecule/Atom where you would like to test the framework.


Create the XML Profile

Create an XML Profile based on MT103.xml (in the project) or based on XML generated using Swift2XML (open the JUnit class and create a new test case).



Use of Swift2XML in a Process

In your Boomi process, add a Data Process shape with a Custom Scripting step. Replace the default script with the following:


To convert from SWIFT to XML:

import java.io.InputStream;
import java.util.Properties;
import com.boomi.proserv.swift.SwiftToXML;

for( int i = 0; i < dataContext.getDataCount(); i++ ) {
  InputStream is = dataContext.getStream(i);
  Properties props = dataContext.getProperties(i);

  // Read the SWIFT message and convert it to XML
  String swift = SwiftToXML.inputStreamToString(is);
  String xml   = SwiftToXML.parseSwift(swift);
  is = SwiftToXML.stringToInputStream(xml);

  dataContext.storeStream(is, props);
}


And to convert from XML back to SWIFT:

import java.io.InputStream;
import java.util.Properties;
import com.boomi.proserv.swift.SwiftToXML;

for( int i = 0; i < dataContext.getDataCount(); i++ ) {
  InputStream is = dataContext.getStream(i);
  Properties props = dataContext.getProperties(i);

  // Read the XML and convert it back to a SWIFT message
  String xml   = SwiftToXML.inputStreamToString(is);
  String swift = SwiftToXML.parseXML(xml);
  is = SwiftToXML.stringToInputStream(swift);

  dataContext.storeStream(is, props);
}



Example Process

The following process reads different SWIFT message types from Disk, gets the message with type 103, and removes the 71A tag.
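As a rough illustration of the "remove the 71A tag" step: once a SWIFT message is XML, dropping a tag is ordinary XML manipulation. The element names below (<message>, <block4>, <tag id="71A">) are stand-ins I made up for this sketch, not necessarily the exact structure Swift2XML generates:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML shape; Swift2XML's real output may use different
# element names. Field 71A carries the SWIFT charge details (e.g. SHA).
doc = ET.fromstring(
    "<message>"
    "<block4>"
    "<tag id='32A'>180531EUR1234,56</tag>"
    "<tag id='71A'>SHA</tag>"
    "</block4>"
    "</message>"
)

block4 = doc.find("block4")
for tag in block4.findall("tag"):
    if tag.get("id") == "71A":
        block4.remove(tag)  # drop the 71A tag, keep everything else

print(ET.tostring(doc, encoding="unicode"))
```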



Decision using the message type


Removing a tag


Execution view


Execution view: Input SWIFT Message (type 103)


Please note the presence of the last element in block 4:


Execution view: XML Message generated by Swift2XML


Execution view: SWIFT Message generated by Swift2XML after removing the tag


For additional information, please visit my GitHub project: swift2xml. If you need to handle SWIFT, give it a try and let me know what you think. Thanks!


Anthony Rabiaza is a Lead Solutions Architect with Dell Boomi.

In this post I wanted to share my experience and a "How To" for building a Boomi process and triggering it whenever a user enters or exits a geographical area. This was the first Boomi use case I thought of when joining the team. From a Boomi point of view it isn't complex at all. Nevertheless, I've shared fairly step-by-step instructions.




Not so long ago I decided that I wanted more control over my heating bills, so I invested in a wireless thermostat. It was great, looked cool on the wall and had some interesting features. I was excited when a new feature was introduced which allowed me to control the thermostat's state purely based upon my mobile phone's location. I tried it and I 'think' it worked. I wasn't totally sure what its criteria were for adjusting the heating: turning it up or down, on or off. What are the thresholds for this? To get full control and learn about geofencing, I decided to see what's involved in building a similar system. Why not build it with Boomi!



There are a few elements required for this process:


  • Location Platform such as, etc. – I’ve chosen for this example as it offers a free plan to get started
  • NEST thermostat developer account – you can register here
  • Mobile Phone: either Android or iOS
  • Boomi Connectors required
    • Boomi Web Services Server connector – available with Professional Plus, Enterprise, and Enterprise Plus Editions
    • HTTP Client Connector


Creating your Callback URI

We'll need to build our Callback URI which will allow us to authenticate our HTTP Client connector with our NEST OAuth Client. This is unique to each user account in Atomsphere. Login to Atomsphere and navigate to your username at the top towards the right and select "Setup" from the drop down menu. Take note of your "Account ID" from the "Account Information" section.


Append this to the callback URL format as follows:<ACCOUNT_ID>/oauth2/callback



So in this case it would be:


Building a NEST

We'll need two things to configure the NEST side of the equation. An OAuth Client to authenticate our Boomi process with and a NEST thermostat. You can either use a live NEST that you own or the simulator that they provide.


Creating an OAuth Client

Now go to and register/sign in to gain access. Once in, we'll create a new OAuth Client, which effectively allows us to authenticate our Boomi process with the NEST environment and grants permission to change the temperature. Then we'll download the NEST Simulator Application and use that in place of a real NEST. It's exactly the same API, but it allows you to test that all works well, and it lets you follow this example even if you don't actually have a NEST.


Once logged in hit "Create New OAuth Client" and fill in all the mandatory fields. Paste in the Boomi Callback URI we created earlier into the Default OAuth Redirect URI field. Then go to permissions and select Thermostat. I chose to get read and write permissions as I'll need read permissions later when I build the NEST JSON profile. You'll need to type in a description into the permissions dialog which will be displayed to users upon authenticating.


Setup Nest OAuth Client


Once created take note of the OAuth section in which you'll find the:


  • Client ID
  • Client Secret
  • Authorization URL


We'll use these when we're configuring our HTTP Client connector.


Setting up the NEST Home Simulator Application

In reality you'd want to set this up to work with a real thermostat. If you have one, that's fine. If not, then the NEST Home Simulator will suffice. It's a Chrome plugin available here. Once it's downloaded, you should log in with the same credentials as those for your NEST developer account. Then create a new thermostat and you're all set.

If you have a NEST thermostat then by all means use that instead of the simulator. Keep in mind that the NEST API has certain rules around the state that your thermostat is in when away from home i.e. ECO mode. I encourage you to have a read at the Documentation for the API here.



Building it out in Boomi

Now that we have a virtual NEST thermostat set up and the OAuth Client created, we can create a process in Boomi which will interact with it. The Boomi process will only have three shapes: a Web Services Server connector at the start to listen for incoming data, a decision shape to route the process, and then an HTTP Client connector for changing the temperature on the NEST. First of all, however, we'll create a test process to perform a GET request so we can build out a JSON profile.


Configuring the HTTP Client connector

Boomi doesn't have a predefined Application Connector for NEST at present so we will build one out of the HTTP Client Connector.


Start a new process and drag an HTTP Client connector onto the canvas. Create a new connection and insert the Client ID, Client Secret and Authorization URL that we recorded whilst creating our NEST OAuth Client previously. Input the Authorization URL from the NEST OAuth Client into the Authorization Token URL and the client secret and ID into the corresponding fields. Use the below table if you need to.


HTTP Client Connector Field: Value

Authentication Type: OAuth 2.0
Client ID: <Client ID from the NEST OAuth Client App>
Client Secret: <Client Secret from the NEST OAuth Client App>
Authorization Token URL: <Authorization URL from the NEST OAuth Client App>
Access Token URL:


We don't need to set any fields in the SSL Options or Advanced tabs. Hit "Generate" and you will be taken to an authorization screen for the NEST OAuth Client app, allowing the HTTP Client connector to generate a token for future use. You should see a screen similar to the one below. Hit Allow and you'll be redirected to a Boomi screen stating "Authorization Code Received". Close this window and you'll see on your AtomSphere screen that the "Access Token generation successful" message is displayed.


Now create a new Operation. No configuration is required here other than ticking the "Follow Redirects" check box. Once saved we can now make calls to our NEST OAuth Client app.


Blog NEST ConnectorOAuth 2.0 Authentication


Importing the NEST JSON Profile

Let's test this simple process and get back an outline of the content that the NEST API provides. Kick off a Test Mode run and select either a local Atom or the Atom Cloud. You should receive JSON content back relating to your NEST account. Viewing the Connection Data will display the JSON response, something similar to the below. I've removed some of the structure; you'll receive more data than this in reality.


Note in the layout of this JSON that the id of my NEST Thermostat is "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH".

{
  "devices": {
    "thermostats": {
      "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH": {
        "humidity": 50,
        "locale": "en-GB",
        "temperature_scale": "C",
        "is_using_emergency_heat": false,
        "has_fan": true,
        "software_version": "5.6.1",
        "has_leaf": true,
        "where_id": "FydB4fkOWmyhzdeEDpGYOg7udugr_YTnOf3fhm3sJoGHw-q_8AY96A",
        "device_id": "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH",
        "name": "Guest House (2D5E)",
        "can_heat": true,
        "can_cool": true,
        "target_temperature_c": 13.5,
        "target_temperature_f": 56,
        "target_temperature_high_c": 26,
        "target_temperature_high_f": 79,
        "target_temperature_low_c": 19,
        "target_temperature_low_f": 66,
        "ambient_temperature_c": 5.5,
        "ambient_temperature_f": 42,
        "away_temperature_high_c": 24,
        "away_temperature_high_f": 76,
        "away_temperature_low_c": 12.5,
        "away_temperature_low_f": 55,
        "eco_temperature_high_c": 24,
        "eco_temperature_high_f": 76,
        "eco_temperature_low_c": 12.5,
        "eco_temperature_low_f": 55,
        "is_locked": false,
        "locked_temp_min_c": 20,
        "locked_temp_min_f": 68,
        "locked_temp_max_c": 22,
        "locked_temp_max_f": 72,
        "sunlight_correction_active": false,
        "sunlight_correction_enabled": true,
        "structure_id": "WoPouIo-IsQTuGgTd1HfbVtJ1y7XErHcj2hebmTEMyLPtovAH0PTpQ",
        "fan_timer_active": false,
        "fan_timer_timeout": "1970-01-01T00:00:00.000Z",
        "fan_timer_duration": 15,
        "previous_hvac_mode": "",
        "hvac_mode": "heat",
        "time_to_target": "~0",
        "time_to_target_training": "ready",
        "where_name": "Guest House",
        "label": "2D5E",
        "name_long": "Guest House Thermostat (2D5E)",
        "is_online": true,
        "hvac_state": "off"
      }
    }
  },
  "structures": { ... },
  "metadata": { ... }
}

The above is just an example. Copy the actual output from your test mode run and save that to a text file.


Now we can create a new profile in Atomsphere and select the JSON Profile Format. Select Import and browse to the file we just created and upload it. This will build out our JSON profile which will be specific to that of our virtual NEST. We're now ready to interact with the NEST API.




Importing the Webhook JSON Profile

We'll need to create a profile for the data that the Webhook will send us. We could configure it and do some test runs to receive this data; however, to speed things up I've pasted their test run output below. Follow a similar process as above and build out a JSON profile for the Webhook.

{
  "event": {
    "_id": "56db1f4613012711002229f6",
    "createdAt": "2018-04-07T12:49:21.890Z",
    "live": false,
    "type": "user.entered_geofence",
    "user": {
      "_id": "56db1f4613012711002229f4",
      "userId": "1",
      "description": "Jerry Seinfeld",
      "metadata": {
        "session": "123"
      }
    },
    "geofence": {
      "_id": "56db1f4613012711002229f5",
      "tag": "venue",
      "externalId": "2",
      "description": "Monk's Café",
      "metadata": {
        "category": "restaurant"
      }
    },
    "location": {
      "type": "Point",
      "coordinates": [ ... ]
    },
    "locationAccuracy": 5,
    "confidence": 3
  }
}

This is also just an example but it's fine for the Webhook profile.



Building the main process

Let's start a new process and configure the start shape as a "Web Services Server" then create a new operation.  These settings are important to make sure the process is executed correctly. The "Operation Type" and "Object" fields will be appended to the URL of our Atom where this process will be deployed to. The "Expected Input Type" should be "Single Data". We aren't sending back any data because there won't be anyone listening to receive it so we can leave the "Response Output type" as None.


Geofence Operation


Now drag in a decision shape and choose the Webhook profile we created earlier for the First Value field. Next choose the element that we want to make a decision based upon. Drill down through the Element field to the Event object and select "Type".


This object will either contain "user.entered_geofence" or "user.exited_geofence". I configured the rest of the shape so that the comparison was "Equal To" with the second value configured as a static value of "user.entered_geofence". Therefore whenever a user enters the geofence the decision rule is set to true and proceeds down the True path. Otherwise it will proceed down the false path.
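The decision shape's routing can be sketched as a few lines of Python (an illustration only, not anything Boomi generates): read event.type from the webhook payload and pick a target temperature for each path, using the 25/10 values from this post.

```python
import json

# Sketch of the decision logic: route on event.type and pick a target
# temperature (25 °C when entering the geofence, 10 °C otherwise),
# mirroring the True/False paths of the Boomi decision shape.
AT_HOME_C, AWAY_C = 25, 10

def target_temperature(webhook_json):
    event_type = json.loads(webhook_json)["event"]["type"]
    if event_type == "user.entered_geofence":
        return AT_HOME_C  # True path
    return AWAY_C         # False path

print(target_temperature('{"event": {"type": "user.entered_geofence"}}'))  # 25
```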


Now let's add in an HTTP Connector to the True path. Leave the action as a Get. Choose our previously configured Virtual NEST Connector for the connection and then create a new Operation. Configure it as in the below image using the NEST JSON profile we created in the Configuring the HTTP Connector step.



Make sure to choose PUT for the HTTP Method here and also check the Follow Redirects check box. This is because the NEST API ingests data using the PUT method only and upon making a request to the API it will issue a 307 redirect which the process will need to follow.


Now select parameters and add a new parameter. Selecting the Input will show the objects for our NEST JSON profile. Drill down through it to your NEST simulators device and select a target temperature object. I chose "target_temperature_c" object and gave it a static value of 25. Probably a little too hot in reality...


Make a copy of this Connector to the False path and change the temperature parameter to a lower value such as 10.


Nest Temperature parameter

If all has gone well you should have a process that looks something like this. Remember, it's always best practice to put a Stop shape at the end of each route.


Geofence NEST Control Process


Deploying to a local Atom

We would typically want authentication configured on our Atom Shared Web Server; however, that won't be possible for this example, because the Webhook cannot be configured to pass our Atom credentials along with its data. As a result we need to configure it as a basic API Type with the Authentication Type set to "None". This also means that this process must live on a locally deployed Atom, because the Atom Cloud requires user authentication when using the Shared Web Server.

*IMPORTANT* This isn't secure so keep this in mind. We could use the Security Token generated by the Webhook to make this more robust however I haven't implemented it here. Perhaps someone could offer up an example of this in the comments?


Navigate to the Manage tab in Atomsphere and select Atom Management. Choose the atom you are going to deploy this process to and then select Shared Web Server from the Settings and Configuration section. Configure your atom similar to the below. The Base URL will need to relate to your specific Atoms web addressable URL and listening port. Don't override unless necessary. If you need in depth help around this then have a look in the community. There's a lot of help there.


Shared Web Server

Take note of your Base URL here which we'll append with our Simple URL path from our Web Services Server Operation later.


Finally navigate to the Deploy tab and Deploy the process to your Atom.


Setting up

Registering for a account is very straightforward and the free edition provides enough functionality for us to achieve this integration.

Go to and fill in the details. Confirm your email to validate your account and login. Once logged in we need to create a Geofence to monitor and a Webhook to send data to when our user crosses the threshold of our geofence. We also need a mobile app to create a user and update with their location.  An SDK is available for developing your own mobile app. For this example I'm going to use the test app they provide.


Creating a Geofence

Log in and search for your location. Addresses and place names are supported. Once you've found your desired location, it's just a case of setting the threshold of our geofence. We would likely want to activate the heating system when we are close but not too close.

I've used a 1km radius which fits well with me walking around the city. Now hit Create and your Geofence is set to go. I've used our office as an example.


Create new Geofence



Setting up a Webhook

The Webhook is configured in the Integrations section of the dashboard. Here we will input the URL of our Web Services Server.

Choose the same environment in which your Geofence exists (Test or Live) and select "Single Event". In the URL we want to input our Web Services Server URL from the Boomi process we created earlier and append it with the Simple URL path from the Web Services Service operation.


In my example the completed URL would be as below: the Base URL from your Shared Web Server settings appended with the Simple URL path from your Web Services Server operation.

At this stage we can hit Test and will send a sample JSON message to our process. You should see the temperature on your NEST simulator app change. If it doesn't then there's a problem. It's a good idea to troubleshoot at this stage to make sure that the Atom and process are configured correctly. We can also use Postman to test as I have in the video below.



Installing and configuring the test mobile app

As mentioned at the beginning, there is an SDK available for developing your own app; however, the test app is really simple and will work just fine.

It is available for either iOS or Android. Once it is installed, we just need to enter the relevant publishable key.

These keys are available at 


My Geofence and Webhook reside in the Test environment, so I used the Test publishable key. It's worthwhile copying this to an email and sending it to yourself, as it's lengthy. Copy it into the app on your mobile device and enable Tracking. If you look in the dashboard, you'll now appear with a user ID and device ID. You're being tracked! Now, whenever this device crosses the threshold of your Geofence, it will fire an event at our Boomi process, which will then interact with our NEST thermostat and change the temperature.


Final thoughts

This is a basic use case using some fairly powerful Boomi shapes and connectors. There is much more we could do here to make this more interesting such as adding additional actions and connecting to multiple NEST devices at the same time. We could trigger lighting, music, all sorts of IoT devices that exist in the home, office or workplace.


In a more commercial use case we could use a similar method for monitoring deliveries, trucking, ships, you name it. Boomi processes can be triggered like this in many different ways to provide services and solutions that can streamline all kinds of scenarios out there.


Looking at this simple example really opened my mind to the huge number of use cases there are when it comes to IoT and Geofences: the variety of ways in which we could utilize the data from these devices, and the actions we could take based upon their location.


I hope you've found this of interest and would love to hear your thoughts, suggestions and the use cases you have found which could be solved using Geofences and IoT devices and of course Boomi.

In our next Community Champion spotlight, we talk with Harikrishna Bonala.  Contributors like Hari make the Boomi Community a vital resource for our customers to get the most from our low-code, cloud-native platform.


How does Boomi address those common customer concerns?


Hari:  From a management perspective, Boomi has a very good user interface — very clean and neat — and we can easily show customers how simple it is to access their integrations. And, of course, Boomi is a true native-cloud, single-instance/multi-tenant platform. Plus, Boomi offers hundreds of application connectors to make integration plug-and-play.



Read the full interview here:  Learning Through Experience: Q&A With Boomi Community Champion Hari Bonala.


Look for more interviews with Community Champions coming soon!


Avro objects

Posted by teemu_hyle Employee Jun 4, 2018

This article describes how to create Apache Avro™ objects from JSON files.
This example uses Apache Avro™ version 1.8.2.


The use case behind this is to send Avro objects to messaging systems like Kafka.
This example creates Avro objects from JSON files and sends them via a connector.
The same logic could be implemented in custom connectors with Dell Boomi connector SDK.


Download Avro tools

Download avro-tools-1.8.2.jar from


Create Custom Library

Upload avro-tools-1.8.2.jar via the Account Libraries tab on the account menu's Setup page.
Create a Custom Library with avro-tools-1.8.2.jar and deploy it to your Atom.

Define Avro schema

The following schema example is used.
A JSON profile can be created in Boomi for this schema, and the Avro schema could be validated at runtime, but this is not necessary.


{
    "namespace": "example.avro",
    "type": "record",
    "name": "User",
    "fields": [{
        "name": "name",
        "type": "string"
    }, {
        "name": "favorite_number",
        "type": ["int", "null"]
    }, {
        "name": "favorite_color",
        "type": ["string", "null"]
    }]
}

Input JSON files

For this example, two JSON files placed on an SFTP server are used.


{"name": "Alyssa", "favorite_number": {"int": 256}, "favorite_color": null}


{"name": "Ben", "favorite_number": {"int": 7}, "favorite_color": {"string": "red"}}
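Notice that non-null values for the union-typed fields are wrapped in an object keyed by the branch type (for example, {"int": 256}); this is Avro's JSON encoding for unions. A minimal illustration of just the wrapping convention in plain Java (illustrative only, not the Avro library):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UnionEncoding {
    // Avro's JSON encoding for nullable unions:
    // null stays null, non-null values are wrapped as {"branchType": value}.
    public static Object encodeUnion(String branchType, Object value) {
        if (value == null) {
            return null;
        }
        Map<String, Object> wrapped = new LinkedHashMap<>();
        wrapped.put(branchType, value);
        return wrapped;
    }

    public static void main(String[] args) {
        System.out.println(encodeUnion("int", 256));     // {int=256}
        System.out.println(encodeUnion("string", null)); // null
    }
}
```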

Create Process

The following process is used to read the input JSON files and the Avro schema. The Avro schema is stored in a Dynamic Process Property.
A Data Process step with Custom Scripting is used to create the Avro object from the JSON.
One Avro object is created from the multiple JSON files.
In this example the Avro object is sent to an SFTP server, but it could be sent to a messaging system like Kafka.

Data Process / Custom Scripting (Avro)

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.util.Properties;
import java.util.Scanner;
import com.boomi.execution.ExecutionUtil;
import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericDatumWriter;

// Avro schema
String schemaStr = ExecutionUtil.getDynamicProcessProperty("AVRO_SCHEMA");
Schema.Parser schemaParser = new Schema.Parser();
Schema schema = schemaParser.parse(schemaStr);

// Avro writer
DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<GenericRecord>(schema);
DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<GenericRecord>(datumWriter);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
dataFileWriter.create(schema, baos);

for (int i = 0; i < dataContext.getDataCount(); i++) {
    InputStream is = dataContext.getStream(i);
    Properties props = dataContext.getProperties(i);

    // Input JSON document
    Scanner s = new Scanner(is).useDelimiter("\\A");
    String jsonString = s.hasNext() ? s.next() : "";

    // JSON to Avro record
    DecoderFactory decoderFactory = new DecoderFactory();
    Decoder decoder = decoderFactory.jsonDecoder(schema, jsonString);
    DatumReader<GenericRecord> reader = new GenericDatumReader<GenericRecord>(schema);
    GenericRecord genericRecord = reader.read(null, decoder);

    // Append the record to the single Avro data file
    dataFileWriter.append(genericRecord);
}
dataFileWriter.close();

// Output document (Avro)
dataContext.storeStream(new ByteArrayInputStream(baos.toByteArray()), new Properties());

Test run and validation

The process is tested, and the Avro object is validated using a small "Deserializer" program.




Below is the code for the "Deserializer" used to validate the Avro object. It takes the schema file and the Avro data file as command-line arguments and prints each record.

import java.io.File;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.DatumReader;
import org.apache.avro.Schema;

class Deserialize {
    public static void main(String args[]) {
        try {
            Schema schema = new Schema.Parser().parse(new File(args[0]));
            File file = new File(args[1]);

            System.out.println("Avro Deserializer");
            DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>(schema);
            DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(file, datumReader);
            GenericRecord record = null;
            while (dataFileReader.hasNext()) {
                record = dataFileReader.next(record);
                System.out.println(record);
            }
            dataFileReader.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
With Dell Boomi Master Data Hub (Hub) you can implement different MDM architectural styles like Consolidation, Centralized, Coexistence and Registry.


Consolidation style

Consolidation is a typical starting point for implementing an MDM solution. In the Consolidation style, there are systems that contribute data to the MDM repository. These systems can be either on-premise applications, like an Oracle database or SAP R/3, or cloud-based applications like ServiceNow.




Concepts: Source

In MDM implementations there are systems that are data providers or data consumers. In Boomi, these systems are called sources.
When a source is attached to a data model, it is configured as a data provider and/or a data consumer. A source that is a data provider contributes data; a source that is a data consumer accepts channel updates.


Data Model Sources

After configuring and deploying a data model to a repository, you attach source systems to the model.
Below is an example where three systems are attached to the Customer model in a repository called Hub Repository.



Integration implementation

An integration process is created for each system that contributes data to the model.
Below is an example that sends SAP Customer (DEBMAS) records to the Hub.


Integration process in Boomi:

Customer upsert operation to the Hub from SAP :

Data synchronization

Below is an example golden record. The record is from SAP, and the SAP ID (KUNNR) is shown as the Entity ID.

The record has not yet been consolidated with or distributed to the other attached systems (such as MySQL).



Registry style

In a Registry-style implementation, the master data remains in the source systems.
Boomi Hub's match functionality can be used in a Registry-style implementation to identify duplicate records.
The source system IDs, the fields used for matching, and potentially a reference/link to the source data are stored in Boomi Hub.

Today, Dell Boomi is announcing the Spring 2018 release of the Boomi platform. This release showcases more than 100 new and enhanced capabilities made generally available to Boomi customers in the first half of this year, including more than 20 features that were requested by Boomi customers (Thank you!).


Spring 2018 Release: Overview

This release includes new and enhanced features to every element of the Boomi platform. Collectively, these features help further simplify Boomi’s user experience, drive developer productivity and facilitate building low code workflow applications. It also showcases Boomi’s support for IoT Integration.


Boomi for IoT

With this release, Boomi unveils the general availability of its IoT solution. This solution has been available to Boomi customers for the last year, and provides the ability to automate business processes and deliver an engaging experience via web, mobile, or any other interfacing technology to achieve tangible business outcomes using device data. 


Features include:

  • Edge device support - IoT integration patterns require seamless integration of device data, application data and people. With the patented Boomi Atom, customers can perform integrations, and host and manage web services, anywhere: on-premises, in clouds, or on edge devices.
  • IoT Protocol support - Boomi provides connectivity to a vast array of cloud and on-premise applications, as well as devices and device data through support for a variety of IoT connectivity protocols, including open standards such as AMQP, MQTT and REST.
  • Workflow Automation for IoT - Organizations can use Boomi Flow to create business processes that respond to certain alerts or triggers as appropriate and allow for human intervention or decisions.
  • Boomi Environment Management - IoT deployments may encompass multiple edge gateways performing common application integration functions to support a multitude of devices across a large landscape. With Boomi Environment Management, users can centrally manage application integration processes and automatically synchronize changes across all Boomi Atoms and associated gateways that belong to an environment.


 Integration Accelerators

Since its inception, Boomi has focused on providing ease of use and tools for accelerating building and managing integration projects. This release includes UI enhancements, simplified reporting to empower data governance specialists, new and improved connectors and connector SDK, as well as the ability for Boomi users to showcase their work and contribute to Process Libraries.

To learn more, please watch the video below:


Developer Productivity

To drive greater developer productivity, Boomi has introduced new features to automate and streamline support for enterprise-scale use cases, thereby reducing the complexity associated with IT operations. Highlights include Packaged Deployment to simplify management of user developed integration artifacts, new authentication capabilities for API management, support for importing Swagger definition or WSDL from publicly facing URL as well as enhanced support for SharePoint in Boomi Flow.

To learn more, please watch the video below:



To learn more about the Partner Developed connectors, please visit the Boomi Blog.

You can also read the Press Release here. 

A year and a half ago, by popular request, we launched our Process Library built into the Boomi platform to provide examples, templates and ‘how-tos’ to help you create your integration solutions faster. This was in response to your ask “just show me an example!”


The Process Library proved valuable to you all, and since then we have been thinking about how to help our ecosystem of customers, OEM, channel partners and system integrators showcase the templates and examples you have built.

Today we are excited to announce the ability for you -- members of the Boomi Community -- to publish and share your examples and templates across the Boomi network.


If you are a Boomi expert, an OEM partner, or a Systems Integrator partner, and you are looking to share your Boomi assets to create more visibility for the work you have done, you have come to the right place...Community Share.


The Community Share allows you to:

  • Share how-to examples and templates
  • Give the community visibility to your expertise
  • Provide access to complex processes partners may have built
  • Search across all assets, whether provided by Dell Boomi, a partner, or other Boomi experts


The Community Share Mission

These improvements are part of our broader vision to make Boomi how-tos and examples more complete, and have those available to share within the community.


1. Power of Boomi Ecosystem

Recognizing the amazing champions and expertise within the larger Boomi ecosystem, we want to provide a place for you to showcase and share your diverse expertise within the Boomi Community.


2. Catered to You

Community Share lets you provide insight directly to your customers and stakeholders when you add to what Boomi provides. You can now feature your work directly inside the Boomi platform. You can also leverage additional examples contributed by the community and accelerate time to value when implementing your ideas.


3. Simplicity

In a few clicks, you can easily provide access to your work, or find answers, templates and samples from experts. We are one community, and Community Share provides a common platform for sharing integration assets.


We believe unlocking the community fuels more innovation and drives ideas on what is possible with the power of you!


Visit Community Share to see what’s available and instructions for how to share your examples.

Thameem Khan

Boomi Enabled Slack Bot

Posted by Thameem Khan Employee May 9, 2018

It's been some time since my last blog. I have been slacking all the while, and I wanted to share my experience of building a Slack bot. As enterprises look at more and more automation, BOTs will play a critical role. BOT platforms provide sophisticated speech recognition and natural language understanding, which enables an efficient UI/UX. But these BOTs need to interact with other applications to serve data to the end user. This is where iPaaS (Boomi) plays a key role: iPaaS enables BOTs to interact with the applications where the data actually resides.


The video below will walk you through an example BOT's functionality and architecture, and how Boomi adds value. I hope you find it interesting and come up with more interesting BOTs of your own. Please feel free to reach out to me; as always, comments and suggestions are welcome.




Amazon Lex – Build Conversation Bots 

AWS Lambda – Serverless Compute - Amazon Web Services 

Integrating an Amazon Lex Bot with Slack - Amazon Lex 

Join AWS for Live Streaming on Twitch 


Thameem Khan is a principal solutions architect at Dell Boomi and has been instrumental in providing thought leadership and competitive positioning for Dell Boomi. He is good at two things: manipulating data and connecting clouds.

I was recently given a challenge to integrate Banner (LINK) data and transform it to an EDI file, specifically a 4010 - 130 Student Educational Record / Transcript (LINK).  Upon researching the Banner structure provided, I figured it would be a good community post for handling complex flat files which contain positional records and have cross record relationships with looping contained therein.


There are many different ways to handle this so I'm providing what I found to be most logical.  I look forward to your feedback on approaches you may have taken in the past to accomplish similar scenarios and I'll try my best to reply to comments or questions as they are posted.


Let's get started:


Many Higher Education organizations use Ellucian Banner for their Student Information System.
Though the sample implementation is Ellucian Banner specific, the same concepts for the Flat File profile may be applied to any positional file which needs to maintain relationships between records - think 'hierarchy'.  Also, though the end target system is EDI with specific segments, like everything in Boomi, the target destination can easily be swapped out for other systems (JSON, XML, DB etc.).



Reference Guide Articles

Here are some links to our Reference Guide, which you may find useful when using and configuring this scenario.

  • Multiple Record Formats - Profile (LINK)
  • Relationships between flat file record formats (LINK)
  • FF Profile Elements (LINK)
  • EDI Profile Identifier Instances (LINK)


Scenarios on How to Use Flat File to EDI

Scenario 1 - How do I configure a Flat File Profile which is positional AND contains hierarchies?

Below is a screenshot of a sample document (attached) which was provided by Banner.

Items of note (from top to bottom):

  • BLUE box: a repeating element with the first two characters "RP" as the identifier when OUTSIDE of the S1 hierarchy
  • RED box: Starts with the S1 identifier and continues until the next S1 identifier
    • YELLOW highlight box: contains information related to S1 parent.  This is a mix of individual lines (S2 & SUN) as well as sub-hierarchies (C1, C2 & RP)
    • LIGHT BLUE box: individual courses taken with additional information contained in the S2 line and optional RP line
  • GREEN box:  repeating element with T1 -> SB -> TS relationship
  • ORANGE box: Identification items for the student's transcript (name, address, DOB etc.)
  • When breaking down the individual lines, the file is POSITIONAL with specific start / end character locations which define the specific record components
  • Understanding the FF relationship between records is key to correctly defining the FF record profile
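The record-identifier idea above can be sketched in plain Java. This is purely illustrative; the record codes and column positions below are hypothetical, and Boomi's Flat File profile does this parsing for you once configured:

```java
import java.util.ArrayList;
import java.util.List;

public class PositionalSketch {
    // Pull the course-code field out of every C1 record in a positional file.
    // Record codes ("S1", "C1", "RP") and column positions are hypothetical.
    public static List<String> extractCourses(String[] lines) {
        List<String> courses = new ArrayList<>();
        for (String line : lines) {
            String recordType = line.substring(0, 2); // identifier in columns 1-2
            if (recordType.equals("C1")) {
                courses.add(line.substring(2, 9).trim()); // start column 3, length 7
            }
        }
        return courses;
    }

    public static void main(String[] args) {
        String[] sample = {
            "S1JOHNDOE  Term header",
            "C1MATH101 Calculus I",
            "C1ENGL205 Technical Writing",
            "RPEXTRA    Repeating element"
        };
        System.out.println(extractCourses(sample)); // [MATH101, ENGL205]
    }
}
```

The real profile additionally preserves which S1 header each C1 belongs to; that hierarchy is what the FF record relationships capture.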

Using the same color coding on the screen cap, the image below shows the FF Profile hierarchy configuration.

Overall profile options were set up with "File Type" = "Data Positioned" (others were left as default).

   Options configuration for the FF profile

Data Elements were created, and configuration for "Detect This File Format By" was set to "Specified unique values"


Positional Start Column, Length & Identity Format configuration was set for each level and element:

For each of the fields identified, configuration was set up accordingly.

Scenario 2 - How do I work with packed number formatting?


One of the fields within the source flat file contained 6 digits ("006000"), but in reality it represents a decimal value with the last three characters being the implied decimal location.

Part of the profile options allows you to specify the Data Format, in which you can indicate it is a Number type with an "Implied Decimal" value. This will auto-convert that field to 006.000 for the output map without any additional math function (my first attempt was to divide by 1000).

Implied Decimal format for packed value
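If you did need to perform the conversion manually (for example, in a Data Process scripting step), the math is simply shifting the decimal point. A sketch in plain Java using BigDecimal; the method and field values are illustrative, assuming three implied decimal places:

```java
import java.math.BigDecimal;

public class ImpliedDecimal {
    // Convert a packed value like "006000" with 3 implied decimal
    // places into the decimal 6.000 by shifting the point left.
    public static BigDecimal unpack(String packed, int impliedPlaces) {
        return new BigDecimal(packed).movePointLeft(impliedPlaces);
    }

    public static void main(String[] args) {
        System.out.println(unpack("006000", 3)); // prints 6.000
    }
}
```

This is equivalent to dividing by 1000, but movePointLeft keeps the scale (three decimal places) without any rounding concerns.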


Scenario 3 - Mapping the FF profile to EDI (4010-130 - Transcripts)

Though Boomi Suggest will map many of the fields based on past customer maps, there may be a few items in the EDI profile that need to be configured based on your needs.

For me, I needed to create two identifier loops for N1 (N101=AS and N101=AT) to accomplish the desired output and simplify my mapping.  Visit this LINK if more information about the identifier instances is required.

EDI Identifier Instances

Some of the target EDI loops also needed the "Looping Option" set to "Occurrence", as opposed to the default "Unique" selection, in order for the output results to nest as expected.


Common Errors

Error: Data is not being presented as expected

Sometimes the source profile needs to be adjusted based on how the actual data is flowing through and where the record identifiers are placed.  Make sure the data is represented in the profile the same way as the FF itself.  You can drag / drop elements and records between levels in the profile.  I would also suggest targeting an XML or JSON profile you manually create to test out the data (easier to see the results than in an EDI file). 


Normally when I teach the various ‘copy’ features in the platform, I follow up, in jest, with “BUT do not ever use these features!” Of course I would never advocate avoiding capability in the platform, but I want the reader to seriously consider the implications of ‘copying’. They can land you in a real mess: something that resembles a nasty spider web!


In this post:


Shallow vs. Deep Copying

Before getting too far into this topic, I want to define two concepts of object copying. I will describe these concepts with a trivial example using familiar Boomi terms.


Consider the familiar map that references a source and destination profile:

sample map illustration


Shallow Copy

If I make a shallow copy of this map, I will have 2 maps (the original and copy), both referencing the same source and destination profiles. Therefore, if I make a change in the source OR destination profiles, that change will be reflected in both maps!


shallow copy illustration


Deep Copy

If I make a deep copy of this map, not only will I have 2 maps (the original and copy) BUT I will also have a copy of the source and destination profiles. The new map copy references the new source and destination profile copies. Making changes in the original source and destination profiles will not affect the map copy. Conversely, making a change to the source and destination profile copies will not affect the original map.

deep copy illustration
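As a rough code analogy (plain Java, purely illustrative; Boomi components are not Java objects), the only difference between the two is whether the referenced object is duplicated:

```java
public class CopyDemo {
    static class Profile {
        String name;
        Profile(String name) { this.name = name; }
    }

    static class MapComponent {
        Profile source;
        MapComponent(Profile source) { this.source = source; }

        // Shallow copy: the new map shares the same Profile reference
        MapComponent shallowCopy() { return new MapComponent(this.source); }

        // Deep copy: the new map gets its own Profile copy
        MapComponent deepCopy() { return new MapComponent(new Profile(this.source.name)); }
    }

    public static void main(String[] args) {
        MapComponent original = new MapComponent(new Profile("Accounts"));
        MapComponent shallow = original.shallowCopy();
        MapComponent deep = original.deepCopy();

        original.source.name = "Accounts v2"; // change the original profile

        System.out.println(shallow.source.name); // Accounts v2 -> change visible in shallow copy
        System.out.println(deep.source.name);    // Accounts    -> deep copy unaffected
    }
}
```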


Our Starting Point

With the understanding of shallow vs. deep copying, let's use a slightly larger example. Consider the following layout:

Project Alpha contains one Process, ‘INT001 – Get Accounts (DB > File)’. I’ve denoted the associated components and references. The process references two Connections, two Operations, and a map (indicated by black arrows on the right). The Map references a Cross Reference Table, two Profiles, and a Map Function (indicated by red arrows on the left).


For the following cases, consider a scenario where Project Beta (similar to Alpha) is about to begin and the developer wants to use parts or all of Project Alpha as a starting point....


Copying in the Component Explorer

There are several scenarios (seven to be exact) when copying components in the Component Explorer...


Copying Components


Copying a component that DOES NOT reference another component from the Component Explorer

Copying the XML Profile, “Accounts” into folder ‘2. Project Beta’, will create a new XML Profile component. This new XML Profile component will not be referenced by any other component.


[Shallow] Copying a component that references another component from the Component Explorer

Copy Component Dependents UNCHECKED – Copying the Map will create a new Map component, BUT both original and new Maps will reference the same source and destination Profiles, Cross Reference Table, and Map Function components. Making a mapping change in the new Map will not affect the old Map mappings, BUT altering either Profiles, Cross Reference Table, or Map Function components will affect BOTH Maps.

Notice how the map in folder '2. Project Beta' references profiles in folder '1. Project Alpha' and Map Function in folder '#shared'.


[Deep] Copying a component that references another component from the Component Explorer

Copy Component Dependents CHECKED - Copying the Map will create a new Map component, AND create new source and destination profiles, Cross Reference Table, and Map Function components. Making a change in the new Map, Profiles, Cross Reference Table, or Map Function components will not affect the old Map, Profiles, Cross Reference Table, or Map Function components (and vice versa). Additionally, all referenced components will be newly created in the SAME folder.

Notice how all referenced components of the map are copied into folder '2. Project Beta'.


Copying folders

[Shallow] Copying a folder; with subfolders checked

Copying a folder structure will duplicate all components within that original folder structure into the Destination Folder (folder structure maintained). Component references within the structure are maintained in the copy, so changes made in the copy will not affect the original (and vice versa). BUT, component references outside the copied structure are also maintained, so changes to referenced components outside the copied structure WILL affect both old and new copies.

Notice how components in the new folder '1. Project Alpha' reference components in the '#shared' folder, while referential integrity within the subfolders is maintained.


[Deep] Copying a folder; with subfolders checked

Copying a folder structure will duplicate all components into the Destination Folder (folder structure maintained). Component references within the structure are maintained in the copy, so changes made in the copy will not affect the original (and vice versa). BUT, components referenced outside the copied structure will also be copied into the folder(s) where they are referenced, so changes to referenced components outside the copied structure will affect the old copies and NOT the new ones.

Notice how all the components in the '#shared' folder were copied, even though it was not a subfolder of '1. Project Alpha'.


[Shallow] Copying a folder; with subfolders unchecked

Copying a folder will duplicate all components into the Destination Folder. Component references within the folder are maintained in the copy, so changes made in the copy will not affect the original (and vice versa). BUT, component references outside the copied folder are also maintained, so changes to referenced components outside the copied folder will affect both old and new copies.

Notice how all the components in the new folder '1. Project Alpha' reference components in the '#shared' folder and the '1. Project Alpha/#Alpha Shared' folder.


[Deep] Copying a folder; with subfolders unchecked

Copying a folder will duplicate all components into the Destination Folder (folder structure maintained). Component references within the folder are maintained in the copy, so changes made in the copy will not affect the original (and vice versa). BUT, components referenced outside the copied folder will also be copied into the folder, so changes to referenced components outside the copied folder will affect the old copies and NOT the new ones.

Notice how all the components in '#shared' and '1. Project Alpha/#Alpha Shared' are copied into the new '1. Project Alpha' folder.


Copying on the Canvas

Copying a shape on the process canvas is NOT the same as copying components in the Component Explorer.


Copying a non-component shape from the canvas to another canvas

Copying and pasting the Message shape on the canvas will create a new Message shape. Any changes made to either shape will not affect the other.  

Copying a component shape from the canvas to another canvas

Copying and pasting the Map shape on the canvas will not create a new Map component: the two shapes will continue to reference the same Map component (and subsequently the referenced Profiles and Map Functions).



These cases are small and sometimes trivial. But now think about doing a deep copy in the Component Explorer on a Process that references many components, which reference other components, spanning multiple folders! Shared components are now replicated, and Connection components are duplicated (think license implications on deployment). The web of references is dizzying... Bottom line: always be mindful when copying, especially when making deep copies.


Lee Sobotkin is a Senior Integration Consultant at Dell Boomi. He is not very fond of spiders nor webs.

Boomi understands that up-to-date information about platform availability is critical to your business. So we are enhancing our Platform and Atom status and incident reporting tool. We are replacing our Trust site with our new Status site to offer many new benefits, including notification options that are more robust, timely, region-specific, and granular in content.


Our new site is now available.

To minimize disruption, our new and existing systems will run in parallel until August 8, 2018, so you can define, adjust, and optimize notification subscription options. For customers who use the Trust RSS feed, this time will allow you to make the minor changes necessary for the new RSS feed options. To learn more about the RSS options, please read my RSS article, which includes examples.

We have created an FAQ in our Knowledge Center to provide more information. You may wish to follow this FAQ to be notified of updates.


We look forward to bringing you enhanced Boomi Trust resources and will communicate additional information as we progress toward launch.

Dell Boomi provides numerous options to architect your runtime platform. You can place your runtime engine in the Dell Boomi cloud, in a private cloud, locally, or even in a hybrid environment. There are different runtime configurations, such as a single Atom, a Molecule, or a group of Atoms, to support various customer requirements. Of those runtime options, a Molecule is the most efficient way to support high volume with scalability. As companies move their applications and infrastructure to cloud providers such as Amazon, Azure, and Google Cloud, it often becomes necessary to run the Boomi runtime with one of those cloud vendors. With Dell Boomi, it is very quick to install the Boomi runtime into any cloud infrastructure. This blog explains in detail the steps involved in installing a Boomi Molecule runtime on Google Cloud: provisioning VMs (nodes) and a shared file server on the Google Cloud Platform, followed by the installation of a Dell Boomi Molecule on the shared file server.

I shall outline the process of setting up a Molecule on the Google Cloud Platform using free Google Cloud offerings. The complete process may vary according to a customer's requirements based on factors such as CPU load, I/O, RAM, storage type, storage amount, and network security. A basic understanding of the Boomi Platform, cloud concepts, and Linux commands is sufficient to complete this setup. The initial steps focus on setting up the Google Cloud environment, followed by installation of the Boomi Molecule runtime.

I look forward to your feedback and will do my best to efficiently respond to each comment or question.



Create a Shared File Server

  • Click on “Cloud Launcher” to look for a single-node file server VM. There are many file servers available on Google Cloud; “Single node file server” will be used as the shared file server for this setup. Refer to all file servers available on Google Cloud.

  • Search for “file share” and select “Single node file server”

  • Click on “LAUNCH ON COMPUTE ENGINE” to deploy the file server.

  • Modify the below screen as per requirements. Don’t forget to keep a note of the region that has been selected. The same region should be used while creating VM instances. Click “Deploy” after all configuration is complete. Refer to single node file server for more information.

  • The next screen will highlight basic information regarding the file server, such as the OS, associated software, and access details. Make a note of the mount command, as it will be helpful when mounting the shared storage from the nodes.

  • The shared file server has been created.


Create a Node in Google Cloud

  • Open VM instances.

  • Click “Create” in VM instances.

  • Complete the form as per your requirements and click “Create”.

Note: The VM can be configured as per your requirements or the customer's guidelines, such as OS, zone (which must be the same as the shared file server's zone), firewall rules, etc.

  • Once the instance has been created, it will appear as below:


Installing Molecule on Google Cloud

  • Open node1 via SSH using the SSH feature provided in the Google Cloud console.

  • A new browser window will open showing an SSH connection to the VM. There are other ways to connect to Google Cloud VMs, such as the gcloud command, PuTTY, etc.

  • Run the below commands to set up the VM instance:
    • Update the existing packages:
      sudo yum update
    • Create the directories for the Molecule installation:
      sudo mkdir /mnt/data
      sudo mkdir /mnt/data/molecule
    • Mount the shared file server on node1:
      sudo mount -t nfs google-cloud-boomi-file-share-vm:/data /mnt/data
    • Create a new user, create the local work directory, and make the “boomi” user the owner of both locations:
      sudo useradd -u 510 boomi
      sudo mkdir -p /usr/local/boomi/work
      sudo ln -s /mnt/data/molecule /usr/local/boomi/molecule
      sudo chown boomi:boomi /mnt/data/molecule
      sudo chown boomi:boomi /usr/local/boomi/work
    • Download and install a JDK.
    • Download the Molecule installer and run it as the “boomi” user:

[boomi@google-cloud-boomi-node1 tmp]$ ./
Starting Installer ...
This will install a Molecule on your computer.
OK [o, Enter], Cancel [c]

Enter your user name and password or a token and supply a name for your Molecule.
Use the email address and password that you use to sign into your Boomi
AtomSphere account or select Token to use a Boomi-generated token.
User Name and Password [1, Enter], Token [2]

User Name

Molecule Name
The following entries are required if the installation machine requires a
proxy server to open the HTTP connections outside of your network.
Use Proxy Settings?
Yes [y], No [n, Enter]
Logging into Boomi.
Authenticating credentials
Choose an account.
Please select the account with which this Molecule will be associated. A
Molecule can only be associated with one account.
Boomi XXXX Account1 - [1]
Boomi_Sunny Bansal - [2]
Boomi XXX account 3 - [3]
Choose an installation directory.

Please select the location of the local and local temp directories. Both locations must be on the local disk and must have sufficient disk space for the Molecule's operation.
Local Directory
This setting sets the Working Data Local Storage Directory property
(com.boomi.container.localDir in the file). For more
information, see the User Guide.

Local Temp Directory

This setting overrides the default Java local temp directory (
in the atom.vmoptions file).
Create symlinks?
Yes [y, Enter], No [n]
Please read the following important information before continuing.

The following will be installed when you choose to continue:

Molecule - google-cloud-boomi-demo
Installation Directory - /mnt/data/molecule/Molecule_FS_Demo
Local Directory - /usr/local/boomi/work
Local Temp Directory - /tmp
Program Group - Boomi AtomSphere\Molecule - google-cloud-boomi-demo


Retrieving Build Number
Extracting files...

Downloading Atom Files
Downloading Molecule Files
Retrieving Container Id
Retrieving account keystore.
Retrieving account trustore.
Configuring Molecule.
The Molecule has been installed on your computer.
Molecule Name: google-cloud-boomi-demo
Preferred JVM: /usr/java/jdk1.8.0_161
Finishing installation...

  • Log in to the AtomSphere platform and the new Molecule will appear.

  • Go to the Molecule's properties and add the clustering properties in the Advanced tab. The Atom will be restarted upon save.
    The format of Initial Hosts for Unicast is a comma-separated list of each node's internal IP address with the port in brackets, for example 10.128.0.2[7800].
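For reference, the unicast settings entered in the Advanced tab are persisted as container properties along these lines (a sketch; the IP addresses are placeholders for your nodes' internal IPs, and the exact property names should be verified against the Boomi documentation for your release):

```properties
# Enable unicast clustering and list every node's internal IP with port 7800.
com.boomi.container.isUnicast=true
com.boomi.container.initialHosts=10.128.0.2[7800],10.128.0.3[7800]
```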

  • Installation of the first node is finished.
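The node-preparation commands from this walkthrough can be collected into one small script (a sketch: the directory layout matches the commands above, and ROOT_PREFIX is a hypothetical knob for rehearsing the layout somewhere other than /):

```shell
# Prepare a Molecule node's local directory layout and symlink.
# ROOT_PREFIX is empty in real use; point it at a scratch directory to
# rehearse the layout without touching the real filesystem.
ROOT_PREFIX="${ROOT_PREFIX:-}"

prepare_dirs() {
  mkdir -p "${ROOT_PREFIX}/mnt/data/molecule"
  mkdir -p "${ROOT_PREFIX}/usr/local/boomi/work"
  # Link the shared Molecule directory into the conventional local path.
  ln -sfn "${ROOT_PREFIX}/mnt/data/molecule" "${ROOT_PREFIX}/usr/local/boomi/molecule"
}

# On a real node (as root), the remaining steps from the walkthrough follow:
#   mount -t nfs google-cloud-boomi-file-share-vm:/data /mnt/data
#   useradd -u 510 boomi
#   chown boomi:boomi /mnt/data/molecule /usr/local/boomi/work
```

Run prepare_dirs before mounting, so that the /mnt/data mount point already exists; the symlink mirrors the sudo ln -s step above.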


Adding Additional Nodes

  • Create a snapshot of the existing node1.

  • Create a new instance from the snapshot.

  • In the Boot disk section, click “Change” and select the “Snapshots” tab to choose the snapshot created previously.

  • All VM instances will now appear as below.

  • Open node2 via SSH and run the below commands:
    • Mount the shared file server:
      sudo mount -t nfs google-cloud-boomi-file-share-vm:/data /mnt/data
    • Start the Atom on node2 as the “boomi” user:
      sudo su boomi
      cd /mnt/data/molecule/Molecule_FS_Demo/bin
      ./atom start


Key Points

  • The commands may vary according to the OS you choose when creating the VM instances and the shared file server.
  • The package manager varies by OS. E.g., for RHEL the yum package manager is used, but for Debian, apt is used.
  • Always run the commands with sudo.
  • It is preferable to create a snapshot of the VM instead of creating a new VM from scratch. This reduces the possibility of human error.
  • Refer to the logs created at the /mnt/data/molecule/Molecule_FS_Demo/logs location; these will help you debug issues related to the Molecule setup.
  • Modification of the /etc/exports file on the NFS server might be necessary to allow the NFS client to change the owner of the NFS location.

Note: The command “sudo chown boomi:boomi /mnt/data/molecule” might fail if the below changes are not made on the NFS server’s file system.
File location: /etc/exports
Old content:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)

New content (note the added export entry for /data; no_root_squash is what allows root on a client to change file ownership, and the “*” client spec is an example you should tighten to your network):
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
/data *(rw,sync,no_subtree_check,no_root_squash)
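With the default root_squash behavior, root on an NFS client is mapped to an unprivileged user, which is why the chown fails; no_root_squash in the export entry lifts that mapping. On the file server, the entry can be added and activated along these lines (a sketch; the export path mirrors this walkthrough and the “*” client spec is an example you should tighten to your network):

```shell
# Run on the NFS server. add_data_export appends an export entry for /data
# that disables root squashing; exportfs -ra then re-reads the export table.
EXPORTS_FILE="${EXPORTS_FILE:-/etc/exports}"

add_data_export() {
  # Only append if /data is not already exported.
  local entry='/data *(rw,sync,no_subtree_check,no_root_squash)'
  grep -q '^/data ' "$EXPORTS_FILE" 2>/dev/null || printf '%s\n' "$entry" >> "$EXPORTS_FILE"
}

# Usage (as root on the file server):
#   add_data_export
#   exportfs -ra
```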



The Open Data Protocol (OData) is a data access protocol built on core protocols like HTTP and commonly accepted methodologies like REST. The protocol enables the creation and consumption of REST APIs, which allow clients to publish and edit resources, identified using URLs and defined in a data model, through the mapping of CRUD (Create, Read, Update, Delete) operations to HTTP verbs.



Resource Operation

OData leverages the following HTTP verbs to indicate the operations on the resources.

  • GET: Get the resource (a collection of entities, a single entity, a structural property, a navigation property, a stream, etc.).
  • POST: Create a new resource.
  • PUT: Update an existing resource by replacing it with a complete instance.
  • PATCH: Update an existing resource by replacing part of its properties with a partial instance.
  • DELETE: Remove the resource.


Basic operations like Create (POST) and Read (GET) do not pose any challenges around concurrency. It is perfectly fine to have multiple concurrent transactions creating or reading resources or individual entities. However, it is different for updating and deleting.


Concurrent updating of a resource by two transactions that try to update the same resource at the same time can lead to problems like the 'lost update' problem: the second transaction overwrites the update made by the first, so the first value is lost to other concurrently running transactions that need to read it. Those transactions read the wrong value and end with incorrect results.
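The lost update can be reproduced in a few lines of shell that mimic two unguarded read-modify-write transactions against a shared value (all names here are illustrative):

```shell
# A file stands in for the shared resource, holding a single number.
DB="${DB:-$(mktemp)}"
echo 100 > "$DB"

# writer <amount> <value-read-earlier>: subtracts and writes back with no
# concurrency control at all.
writer() {
  echo $(( $2 - $1 )) > "$DB"
}

a=$(cat "$DB")   # transaction A reads 100
b=$(cat "$DB")   # transaction B also reads 100
writer 10 "$a"   # A writes 90
writer 20 "$b"   # B writes 80: A's update is silently lost (70 was intended)
```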


OData V4 and Concurrency

OData V4 supports ETags for Data Modification Requests and Action Requests. The ETag, or Entity Tag, is one of several mechanisms that HTTP provides for purposes such as web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient and saves bandwidth, as a web server does not need to send a full response if the content has not changed. ETags can also be used for optimistic concurrency control, as in the case of OData, as a way to help prevent simultaneous updates of a resource from overwriting each other.


An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL. If the resource representation at that URL ever changes, a new and different ETag is assigned. Used in this manner ETags are similar to fingerprints, and they can be quickly compared to determine whether two representations of a resource are the same.


From the OData V4 Protocol Specifications (Ref. OData Version 4.0. Part 1: Protocol Plus Errata 03 ): Use of ETags for Avoiding Update Conflicts
If an ETag value is specified in an If-Match or If-None-Match header of a Data Modification Request or Action Request, the operation MUST only be invoked if the if-match or if-none-match condition is satisfied.

The ETag value specified in the if-match or if-none-match request header may be obtained from an ETag header of a response for an individual entity, or may be included for an individual entity in a format-specific manner.


So, for every update, keep in mind that a previously obtained ETag value may be out of date; before issuing an update request, always first use a GET request to obtain the current ETag of the specified entity.
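That GET-then-conditional-update dance can be sketched in shell by simulating the server side: a file plays the entity and a checksum of its content plays the ETag (all names are illustrative; this is not the connector's implementation):

```shell
# A file stands in for the entity; its md5 checksum stands in for the ETag.
RESOURCE="${RESOURCE:-$(mktemp)}"

etag() { md5sum "$RESOURCE" | cut -d' ' -f1; }   # what a GET would return

# conditional_update <if-match-etag> <new-content>
# Applies the update only if the supplied ETag still matches the current one,
# mirroring the If-Match check a server performs on PATCH.
conditional_update() {
  if [ "$1" = "$(etag)" ]; then
    printf '%s\n' "$2" > "$RESOURCE"
    echo "204 No Content"
  else
    echo "412 Precondition Failed"
  fi
}
```

A client that obtained its ETag before a concurrent change receives 412 Precondition Failed instead of silently causing a lost update, and can GET again and retry.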


Boomi OData V4 Connector

In the latest release of the OData Client Connector, concurrency is now supported (V4 only) through the mechanism described above. Both the If-Match and If-None-Match headers have been added as a configurable option, 'Concurrency Mode', in the OData Client Operation. The Concurrency Mode determines how any supplied ETags should be evaluated. Furthermore, ETag support has been added to the OData Client Connector as a Document Property.



Example Use Case - Microsoft Navision 2017

As part of a full Boomi platform POC that involved our Master Data Hub (or 'Hub') with Salesforce, SAP, and Navision (2017) as sources, we implemented the typical Channel Updates synchronization process by taking the Process Library synchronization templates and specializing them for the various sources. For the Channel Updates (for the 'Contact' domain) that need to be applied in Navision, we end up with the following typical (POC) 'CUD' process:



When we zoom in on the Map shape that performs the transformation from the Hub (MDM) XML profile - fetched as part of the channel updates - to the JSON profile that we need to construct as part of the OData Update Request, we see the following:



Naturally, we recognize all the typical field mappings for the Contact domain, as defined by us in the Hub, on the left-hand side, mapped to Navision on the right-hand side. More importantly, in light of this article, we further zoom in on the 'editLink and ETag' function, which builds the editLink, gets the ETag for the entity at hand, and sets the ETag Document Property accordingly.



We use a Connector Call as part of the second branch in this function to retrieve the ETag for the Entity we want to update and set the ETag Document Property accordingly:



Now when a Data Steward makes a change in one of the 'Contact' Golden Records - perhaps to solve a Data Quality Error - a channel update for Navision will be pending (as in this case Navision is set up as a source that will accept channel updates):



Now, the Data Steward changes the Phone number for the 'Contact' Golden Record in the Master Data Hub from +31243880000 to +31243882222...



As said, this generates Channel Updates including one for Navision:



When we now execute the Fetch Channel Updates process for the Contact Domain in AtomSphere we can see the following...

First of all, we see the Channel Update going down the expected path or branch - the one that takes care of updates:



Second, we can inspect the response document to see that the update was actually applied!



Also, inspecting the Atom logs, we can see how indeed the ETag nicely got applied in the If-Match header as configured as part of our OData Client Connector Operation!


sendRequestHeader] >> PATCH /xxx_xx9999nl_a/ODataV4/Company('Acceptatie%20Xxxxxxxx%20Xxxxxx')/Contact('C00004') HTTP/1.1
sendRequestHeader] >> Accept: application/json;odata.metadata=full
sendRequestHeader] >> Content-Type: application/json;odata.metadata=none
sendRequestHeader] >> OData-MaxVersion: 4.0
sendRequestHeader] >> OData-Version: 4.0
sendRequestHeader] >> Content-Length: 206
sendRequestHeader] >> Host:
sendRequestHeader] >> Connection: Keep-Alive
sendRequestHeader] >> User-Agent: Apache-Olingo/4.3.0.boomi2
sendRequestHeader] >> Authorization: NTLM XXXXXXXXX==




Boomi MDM has a new name! At its core the application is a master data integration platform, and so with this release, the application is renamed Master Data Hub — Hub, for short.


Boomi defines synchronization as the action of causing a set of trusted records to remain identical in multiple sources. Many Boomi customers are solving master data challenges today by connecting systems and integrating data. But as Chief Product Officer Steve Wood recently noted in this blog post, "We know from listening to our customers that just connecting siloed cloud or on-premise data isn’t enough. You need to manage your data quality and synchronization among applications."


With Hub’s synchronization capability, we enable customers to standardize the nouns of their business (Customers, Products, Employees, Vendors, etc.) and ensure they are accurate, consistent and complete. And as systems or users with varying levels of trust update that information, we ensure these "golden records" comply with business standards before propagating those updates to many downstream systems or business processes. All of this is enabled through the Boomi platform's unprecedented ease of use and reusable features that support quick time to value.


It is important to note that the practice of Master Data Management (MDM) is still supported on the Boomi platform. The Boomi MDM connector supports the integration of data with the Hub and the MDM Atom Clouds support the setup and maintenance of master data repositories. We will continue to enable and enforce MDM and Data Governance programs. There is no change to the direction in which we are synchronizing. 


And with the power of the entire Boomi platform, we can extend these practices even further. Our vision is to leverage the Master Data Hub as a force multiplier for the connected business. As you integrate with file systems and applications or enable data on-boarding and multi-step approval workflows, you may realize that a hub architecture best fits your use cases, letting you pivot these critical data sets more broadly across the business. The Master Data Hub will continue to reflect these design patterns and drive more insights to ensure you perform this quickly and safely.


If you are an AtomSphere Process Developer and want to explore examples with Master Data Hub, please review the updated Getting Started Guide.


We look forward to your feedback and ideas as we continue to embark on this synchronization journey!