
Boomi Buzz


Low-code app development just got a lot more efficient, and a whole lot more elegant. You can now have the best of cloud, along with all the advantages of on-prem, with Boomi Flow.


Boomi Flow now lets you store your runtime data in any location of your choice. You can also restrict your apps to specific regions if you like. We are bringing crisp new Boomi Flow capabilities to you!



What new capabilities?

  1. With the new Flow external storage feature, you decide where you would like your application data to reside.
  2. With the new Flow multi-region feature, you have the option of restricting access to your apps only to specific regions.


Why is this exciting?

Let’s use a hypothetical example to understand how powerful the new capabilities are. Say you want your runtime data to be in France, and the enterprise apps you build with Flow to be accessible only in Singapore. You can now configure this in a few minutes.


Here’s another. Say you want to store your runtime data in the UK and never have it leave the UK, while restricting access to your apps to end users based in the UK only. You can do that too.


Having an external data store outside of where the Flow platform normally stores your runtime data gives you even more granular control. With the new Flow capabilities, even though you are using a cloud-based app development platform, you decide where your runtime data ultimately resides.


You can use the external data store to meet specific data residency and compliance requirements. You could, for example, have your data reside only on a server inside your corporate headquarters in EMEA or APJ, even though your app is available worldwide. Bespoke security configurations? Bring it on!


As you may have guessed by now, all this lets you get the best of three worlds: low-code, cloud, and on-prem.


  1. Building your apps with Flow means you can use our low-code app development platform, dragging and dropping components onto a canvas while the engine auto-generates the metadata for you.
  2. The Flow app development platform is cloud-based, so you do not have to install or update any software. Multiple stakeholders within your company can collaborate to build an app.
  3. Now you can use external storage as well, where your data is stored where you want it to be and delivered from a cloud instance closest to your builder or user. You can choose to have the data store on-prem within your corporate IT infrastructure, or in a cloud instance of your choice.


With the Flow platform running multiple cloud instances in different regions, you have the power to run Flow apps wherever you want, with even better content delivery.


Your data, where you want it. Your apps, where you’d like them to be available.


How does the external store work?

When you create a store, you are providing an HTTPS-accessible endpoint that the Flow platform uses to store and retrieve the runtime data from your apps.


The Flow engine, in turn, generates a platform key and a receiver key pair, which are used to sign, verify, encrypt, and decrypt messages sent to and from the external storage provider. The endpoint adheres to the Flow External Storage API.
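The actual key formats and message envelope are defined by the Flow External Storage API, but the general sign-and-verify idea can be sketched generically. This is an illustration only, not the Flow implementation; the key and payload shapes below are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical shared key for illustration; the real Flow platform issues its
# own platform/receiver key pair as defined by the External Storage API.
PLATFORM_KEY = b"platform-secret-key"

def sign_message(payload: dict, key: bytes) -> str:
    """Return an HMAC-SHA256 signature over the serialized payload."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hmac.new(key, body, hashlib.sha256).hexdigest()

def verify_message(payload: dict, signature: str, key: bytes) -> bool:
    """Constant-time check that the signature matches the payload."""
    expected = sign_message(payload, key)
    return hmac.compare_digest(expected, signature)
```

The external store would verify each incoming message before persisting it, and sign its responses the same way, so both sides can detect tampering in transit.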


You can configure your tenant to use your external store only, to use Boomi Flow cloud only, or to use a combination of both.


What happens if I do not want to use an external store?

You absolutely do not have to do anything new or different! Right now, when you run your apps built with Flow, your runtime data is saved to the platform.


If you like, you can continue to use the platform for storage and not create an external store at all. It depends on what you want, and what your business, compliance, and data residency needs may be.


The default Flow data store in the Boomi Flow Cloud is managed by the Boomi platform. Having the runtime data stored and managed by Boomi frees you up from having to worry about storage, provisioning, or security.


How can I set up external storage?

We have an implementation guide for you, with step-by-step procedures on how to configure your tenant and set up an external store.


It is easier than you think!


What are the runtime regions available?

Currently, the runtime has instances in the following regions:

  • AU: Australia (Sydney, AU)
  • GB: Europe (London, GB)
  • US: North America (Virginia, US)

Based on where your end-users are running the app, the Flow engine automatically chooses the closest Boomi Flow Cloud instance to deliver the content.


You do not need to do anything extra! The Flow engine automatically detects where the app is being run and uses the cloud closest to your end user. That means better data residency options for you, with the side benefit of increased performance!


Let’s talk about restrictions now.

At the moment, you set up the restrictions via the Flow API, using the inbuilt editor in the drawing tool. Restrictions can be set up at the flow level, where you restrict specific flows, or at the tenant level, where you restrict all flows of a particular tenant to specific regions.



The Dell Boomi Flow team members who led the external storage functionality include our Flow architect and principal engineer Jonjo McKay, along with Jose Collazzi, Jacek Wojciechowski, and Tom Fullalove, working with the Flow product head honcho in Boomi, Manoj Gujarathi. Here’s a shout-out to the Flow team for exceeding our expectations!


Live long and prosper. And, happy building!



1] This is a technology preview version and is not a production release. We recommend you wait until we finalize the features based on user feedback and launch the final production release before using external storage in your production environment.

2] What do you think of this feature? We would love to hear from you! Please write in with your thoughts, questions, and comments.


Sophie Banerjee is Content Manager, Boomi Flow.

With Dell Boomi Master Data Hub (Hub) you can implement different MDM architectural styles, such as Consolidation, Centralized, Coexistence, and Registry.


Consolidation style

Consolidation is a typical starting point for implementing an MDM solution. In Consolidation, there are systems that contribute data to the MDM repository. These systems can be either on-premise applications, like Oracle Database or SAP R/3, or cloud-based applications, like ServiceNow.




Concepts: Source

In MDM implementations there are systems that are data providers or data consumers. In Boomi, these systems are called sources.
When a source is attached to a data model, it is configured to be a data provider and/or a data consumer. A source that is a data provider contributes data; a source that is a data consumer accepts channel updates.


Data Model Sources

After configuring and deploying a data model to a repository, you attach source systems to the model.
Below is an example where three systems are attached to the Customer model, which is in a repository called Hub Repository.



Integration implementation

An integration process is created for each system that contributes data to the model.
Below is an example that sends SAP Customer (DEBMAS) records to the Hub.


Integration process in Boomi:

Customer upsert operation to the Hub from SAP:

Data synchronization

Below is an example golden record. The record is from SAP, and the SAP ID (KUNNR) is shown as the Entity ID.

The record is not yet consolidated or distributed to the other systems (such as MySQL).



Registry style

In a Registry-style implementation, the master data remains in the source systems.
Boomi Hub match functionality can be used in a Registry-style implementation to identify duplicate records.
Source system IDs, the fields used for matching, and potentially a reference/link to the source data are stored in Boomi Hub.

It should be noted that implementing the Registry style for a master data hub doesn't necessarily mean that the hub is used purely as an index or a registry.




Coexistence style

In a Coexistence-style implementation, data authoring is distributed. Golden records are in the master data hub, but some source systems might not be synchronized.
Some systems are data providers, some are data consumers, and some are both.
In Boomi, a source that is a data provider contributes data, and a source that is a data consumer accepts channel updates; a source can also be both. (See the section 'Concepts: Source' above.)
Coexistence is a typical implementation style with Boomi Master Data Hub.



Centralized style


In a Centralized-style implementation, data authoring is done in the master data hub.


The workflow automation and app development capabilities of Boomi Flow can be used to create fully customizable user interfaces for data authors.



Below is a simple example of custom UI created with Flow for master data.


It’s easy to create, update, and delete a single Google Sheet, but do you know how to make changes to multiple objects? In this article, we’ll show you how to build this functionality into any process. Check out this previous post for the basics of working with the Google Sheets connector.




General process

Every process in this article shares a similar pattern. The only difference is in the Input Data needed and the Operation performed by the Google Sheets shape.

The three relevant shapes are:

  • "INPUT DATA" Message shape. This shape varies for each operation, providing the spreadsheetId, Sheet IDs and other relevant input data for running the operation.
  • The "Split Documents" Data process shape. This shape uses the same configuration in every example:

  • The Google Sheets Connector shape. This shape specifies the Action, Connection, and Operation.
    • The Action is selected as: Create, Delete, Get, Query, or Update.
    • The Connection ties it to a Google Drive account that has permission to perform the desired operations. The Community article March 2017 Release Deep Dive: Google Sheets Connector provides the steps needed to configure the connection.
    • The Operation defines what the connector shape should do and its configuration is provided for every scenario.


How to get the spreadsheetId and sheetId

We need the ID of each spreadsheet we use (named spreadsheetId). The ID is the alphanumeric (including underscores and dashes) string between /d/ and /edit. The sheetId is the numeric string following "gid=" that identifies the particular sheet. The ‘Sheet Title’ is the custom name a user can assign to each Sheet.
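To make the extraction concrete: given a URL of the form https://docs.google.com/spreadsheets/d/&lt;spreadsheetId&gt;/edit#gid=&lt;sheetId&gt;, both IDs can be pulled out with a regular expression. A minimal Python sketch (the helper name is my own, not part of the connector):

```python
import re

def parse_sheet_url(url: str):
    """Extract the spreadsheetId (between /d/ and /edit) and the numeric
    sheetId (after gid=) from a Google Sheets URL."""
    spreadsheet = re.search(r"/d/([A-Za-z0-9_-]+)/edit", url)
    gid = re.search(r"gid=(\d+)", url)
    return (
        spreadsheet.group(1) if spreadsheet else None,
        gid.group(1) if gid else "0",  # gid defaults to 0 when absent
    )
```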


Create, Update, and Delete spreadsheets through documents

This section covers three scenarios: creating spreadsheets, changing spreadsheet names, and deleting spreadsheets.


Create multiple spreadsheets with multiple input documents (Scenario 1)

In this scenario, our goal is to create three different spreadsheets sending three input documents.

  1. Copy the generic Process and set the Message shape to the following text. Make sure to add the quotes properly:
    '{"spreadsheetTitle": "SpreadSheet 1"}
    {"spreadsheetTitle": "SpreadSheet 2"}
    {"spreadsheetTitle": "SpreadSheet 3"}'
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to CREATE.
    2. Select the Connection you want to work with.
    3. Select or create an Operation ("BATCH Create Spreadsheets" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the object type: Spreadsheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.
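The Message shape content in step 1 is simply newline-separated JSON objects, one per document, which the "Split Documents" Data Process shape then splits into individual documents. If you were generating that content programmatically (outside of Boomi; illustration only), it might look like:

```python
import json

# Build the newline-separated JSON documents that the Message shape expects;
# each line becomes one document after the "Split Documents" shape runs.
titles = ["SpreadSheet 1", "SpreadSheet 2", "SpreadSheet 3"]
message = "\n".join(json.dumps({"spreadsheetTitle": t}) for t in titles)
```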


Update spreadsheets titles sending multiple input documents (Scenario 2)

In this scenario, our goal is to update three spreadsheet titles sending three input documents.

  1. Copy the generic Process and set the Message shape to the following text. Change each spreadsheetId to the ID you want to rename. Make sure to add the quotes:
    '{"spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I", "spreadsheetTitle": "Spreadsheet 1 Updated"}
    {"spreadsheetId": "1dA5OHwTER8jBe3D_RSSumDcvZSDH-1L7w8bhqHquK28","spreadsheetTitle": "Spreadsheet 2 Updated"}
    {"spreadsheetId": "1cdgPl9F5jrQnT38Sk1N2fm4Jns4FT8fDnGdjj9bdoDU","spreadsheetTitle": "Spreadsheet 3 Updated"}'
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to UPDATE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Update Spreadsheets Title" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the object type: Spreadsheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.


Delete spreadsheets sending multiple input documents (Scenario 3)

In this scenario, our goal is to delete three spreadsheets sending three input documents.

  1. Copy the generic Process and set the Message shape to the following text. Change each spreadsheetId to the ID that you want to delete. Make sure to add the quotes:
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to DELETE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Delete Spreadsheet" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the object type: Spreadsheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.


Create and Update data through documents

This section covers two scenarios that send multiple input documents: one scenario creates record data, and the other updates record data.


Create RecordData sending multiple input documents (Scenario 1)

In this scenario, our goal is to add three rows to a sheet, sending three input documents.

  1. Copy the generic Process and set the Message shape to the following text. Change the spreadsheetId, Sheet Title and the data you want to record. Make sure to add the quotes:
    '{"Last Name": "Trotter", "Name": "Michael", "sheetTitle": "Sheet1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"Last Name": "Wallace", "Name": "Vanessa", "sheetTitle": "Sheet1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"Last Name": "Summons", "Name": "William", "sheetTitle": "Sheet1","spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}'
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to CREATE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Record New Data" is a sensible name)
      1. Click the Import button and set these values:

        Note: The option "Has 1st Row of headers?" can be left unchecked. In that case, the fields must be referenced by the column letter in the Message shape.
      2. Click Next.
      3. Select the object type: RecordData.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.


Update RecordData sending multiple input documents (Scenario 2)

In this scenario, our goal is to update three rows in a sheet sending three input documents, as follows:

  1. Copy the generic Process and set the Message shape to the following text. Change the spreadsheetId, Sheet Title, and the data you want to record. Make sure to add the quotes:
    '{"rowIndex": 2 , "A":"Jamie", "B":"Vance", "sheetTitle": "Sheet1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"rowIndex": 3 , "A":"Julio", "B":"Akers", "sheetTitle": "Sheet1","spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"rowIndex": 4 , "A":"Ethan", "B":"Verne", "sheetTitle": "Sheet1","spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}'
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to UPDATE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Update Data" is a sensible name)
      1. Click the Import button and set these values:

        Note: The option "Has 1st Row of headers?" can be checked. In that case, the fields must be referenced by their header in the Message shape.
      2. Click Next.
      3. Select the Object Type: RecordData.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.


Create, Update and Delete sheets through documents

This section covers four scenarios: two for creating sheets, and one each for updating and deleting sheets.


Create sheets sending multiple input documents (Scenario 1)

In this scenario, our goal is to create three sheets sending three input documents, starting with a spreadsheet that only has Sheet1:

  1. Copy the generic Process and set the Message shape to the following text. Change the spreadsheetId and the desired Sheet Title. Make sure to add the quotes:
    '{"sheetTitle": "Sheet0", "spreadsheetId":"1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"sheetTitle": "Sheet2", "spreadsheetId":"1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"sheetTitle": "Sheet3", "spreadsheetId":"1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}'
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to CREATE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Create Sheets" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the object type: Sheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.


Create sheets in a specific order sending multiple input documents (Scenario 2)

In this scenario, our goal is to create three sheets sending three input documents in a specific order.

  • The spreadsheet’s status before the process execution is:
  • The spreadsheet’s status after the process execution is:
  1. Copy the generic Process and set the Message shape to the following text. Change the spreadsheetId, Sheet Title, and the place you want them to be created. Make sure to add the quotes:
    '{"sheetIndex": 3, "sheetTitle": "Sheet 2.1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"sheetIndex": 2, "sheetTitle": "Sheet 1.1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}
    {"sheetIndex": 1, "sheetTitle": "Sheet 0.1", "spreadsheetId": "1TOrhqVJJuwzvxTmmLL178grqUdn0xQSa2-4mc-7l10I"}'

    NOTE: Since the index of each sheet depends upon its position, each new sheet displaces those with a greater or equal index, increasing their indexes by one. Avoid creating several sheets at a time, since they may interfere with each other's indexes. If that is unavoidable, create the sheets from greater to lesser indexes to avoid confusion.
    Here is the initial state of the spreadsheet:

    Here is the state of the spreadsheet when the first sheet is created in the sheetIndex 3:

    Here is the state of the spreadsheet when the second sheet is created in the sheetIndex 2:

    Here is the state of the spreadsheet when the third sheet is created in the sheetIndex 1:
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to CREATE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Insert Sheets" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the Object Type: Sheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.
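The displacement behavior described in the note above works like inserting into an ordered list: each insertion shifts every sheet at or after that index one position to the right. A small Python sketch of the same greater-to-lesser insert sequence (the initial sheet titles are hypothetical):

```python
# Simulate sheet-index displacement: each insert shifts existing sheets
# at or after that index one position to the right.
sheets = ["Sheet0", "Sheet1", "Sheet2", "Sheet3"]  # hypothetical initial state

# Insert from greater to lesser indexes, as the note recommends,
# so earlier inserts are not displaced by later ones.
for index, title in [(3, "Sheet 2.1"), (2, "Sheet 1.1"), (1, "Sheet 0.1")]:
    sheets.insert(index, title)
```

After the loop, each new sheet sits exactly at the index it was given, interleaved between the original sheets.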


Delete sheets sending multiple input documents (Scenario 3)

In this scenario, our goal is to delete three sheets sending three input documents.

  1. Copy the generic Process and set the Message shape to the following text. Change the spreadsheetId and sheetId to the one you want to delete. The required input format for the ID to delete is spreadsheetId|sheetId. Make sure to add the quotes:
  2. Modify the Google Sheet Connector shape:
    1. Set the Action to DELETE.
    2. Choose the Connection you want to work with.
    3. Choose or create an Operation ("BATCH Delete Sheet" is a sensible name)
      1. Click the Import button and set these values:
      2. Click Next.
      3. Select the Object Type: Sheet.
      4. Click Next, Finish, and Save and Close.
    4. Click OK to close the Connector Shape panel.
  3. Click Save, Test, select your desired Atom, and Run Test.

Congratulations! You have successfully defined operations using the Google Sheets connector to create, update, and delete multiple objects at once.

Common Errors


"Row Index needs to be set as positive number"

Full text:

Test execution of Google Sheets Sample completed with errors. Embedded message: Row Index needs to be set as positive number.

This error occurs when the required "rowIndex" property is not found in one or more objects. Any object that is properly formatted will still be processed. Because the property name is a case-sensitive string, verify that it is spelled correctly and that its value is a positive integer.
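If you want to catch this before running the process, you can pre-validate the Message shape content. A hedged Python sketch (the helper is my own, not part of Boomi); it flags any line missing a positive integer "rowIndex":

```python
import json

def validate_row_index(document: str):
    """Check each newline-separated JSON object for a positive integer
    "rowIndex" property (the property name is case-sensitive)."""
    problems = []
    for line_number, line in enumerate(document.splitlines(), start=1):
        obj = json.loads(line)
        value = obj.get("rowIndex")
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(value, int) or isinstance(value, bool) or value < 1:
            problems.append(line_number)
    return problems
```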



  • Google Sheets is a web-based application that you can use to create, update, and modify spreadsheets and share the data live. For more information, see the Google Sheets website.
  • Object is an unordered set of name/value pairs. An object begins with { (left brace) and ends with } (right brace). Each name is followed by : (colon), and the name/value pairs are separated by , (comma).
  • Spreadsheet is a file that contains one or more sheets.
  • SpreadsheetId is the alphanumeric identifier found in the spreadsheet's URL between /d/ and /edit.
  • Sheet is a matrix (of column and row dimensions) of cells.
  • SheetId is the numeric gid identifier. When positioned at the desired sheet, it can be found in the URL after "gid=". If no gid is shown, you can assume 0 as the default.
  • Cell is the rectangle in a sheet that is the intersection of a column and a row.
  • Column is a group of cells that runs vertically from top-to-bottom. Columns are identified by letters with values from A to ZZZ.
  • Row is a group of cells that runs horizontally from side to side. Rows are identified by numbers.
  • Record Data is an internal concept in the Google Sheets connector which represents the data in a group of cells that runs horizontally from an initial column to a final column. Record data outside of the identified columns is not included in the profile.

We appreciate all the work our customers are doing to help build the Connected Business through the use of Boomi, so this year, we are excited to introduce the Boomi Customer Excellence Awards. The awards honor and recognize Boomi customers who have transformed their organizations, customers, or communities through inventive deployment of Boomi’s products and solutions in the areas of integration, API management, EDI, ROI, and innovation.


During Boomi World 2018 we will present Boomi Customer Excellence awards in the categories of ROI, Emerging Technology, Innovator, and Change Agent. All Dell Boomi customers using any Boomi solution are welcome and encouraged to apply.


To apply, click here: Customer Excellence Awards. Specific eligibility criteria can be found under Customer Excellence Awards FAQs.


Award winners in each category will be recognized in front of Boomi World attendees and Boomi executives during the awards ceremony. Please submit your Customer Excellence Awards application by September 28th 2018 in any of the following four categories:





Customers achieving exceptional business results and ROI using Boomi solutions


Customers in the early stages of their groundbreaking transformation journeys


Customers who continue to evolve and impact their business, customers and community


An individual within an organization who is pushing boundaries and disrupting the industry

Boomi Flow gives you the ability to build applications and business processes. Think of it as integrating your humans with your systems and data. In this post I want to share some tips and best practices for using the Step element in your next flow.


Simply put, a Step element allows a flow builder to create a simple user interface (UI), most often containing text and images. Let's learn more.



Reference Guide

Here is a link to a tutorial that you may find useful when using and configuring Step elements.


Step Element Usage


When to use a Step

A Step is a perfect way to present visual data to the Flow user and give them options in the form of buttons. Conversely, if your goal is to collect data with components like input fields, toggles, or picklists, then a Page element is the best choice. A Page has more capabilities than a Step but requires more configuration. Stay tuned for a future post on Page elements; but for now, back to the topic at hand. Here are a couple of screenshots of the sort of views we often use Steps to create:


Mobile step with outcomes    Desktop process summary step

Connecting Steps with other parts of the flow



A Step is the light blue element available to be dragged onto the build canvas. Any Outcomes dragged from a Step into another element will appear at the bottom of the Step UI as buttons. The "Go" and "Approve" lines in the above image are Outcomes.


The text content of these buttons is defined by the label in the Outcome configuration dialogue that pops up after you drag a line from the middle of one element to the next. An Outcome with a label of "Approve" is rendered like this:


Sample 'approve' button


In the image above you can see the Outcome being rendered as a button. Interestingly, Outcomes can also be used on non-UI elements. In that case they provide logic, rules, and automated routing.


Configurations within a Step

While you're adding content to your Steps, HTML is being generated behind-the-scenes. Standard text added to a Step will generate a paragraph HTML tag by default.


For example, typing "My text content" will generate the following behind the scenes:

<p>My text content</p>


Of course, with Boomi Flow you don't have to worry about the code... unless you want to!


The rich-text editor includes options to bold or italicize this text content.

Screenshot of configuration options


Within the Format menu of this rich-text editor you'll also find the ability to apply other varieties of style, like block text, subscript, and superscript. Headings are commonly used to set the size and style of text. Behind the scenes, these generate the HTML heading tags.


Screenshot of formatting options

I've found a favorite format configuration to be Heading 3 as the Step's 'title' (which generates the <h3> HTML tag), with a combination of bold text, italicized text, and normal text to fill out the content.

Other useful point-and-click configuration options

The Mountain icon lets you insert an image. You can either upload one on the fly, or use a file that you've already uploaded to your tenant's Assets.


Insert image icon


The Chain icon lets you insert a web link, to be opened in the same or a different tab.


Insert url icon


The Insert Value button lets you search and add a "value reference" to any of the values in your tenant. This is very handy for showing loaded or collected data, in text form.

Insert value button


Side-by-side example:

Value reference example



Going Deeper


Step source code


In the "View" menu of the Step element you're actually able to see and edit the HTML that's being generated.

Screenshot of view source


Within that view you can add the tag <hr> to generate a horizontal line. This is useful for separating sections of content.

HR Source HR Render


CSS and embedding

The Step source code allows you to leverage a number of HTML mechanisms. You can insert iframes in a Step (perhaps to embed a YouTube video or a document).


You're also able to add inline style to any tags.


<h3 style="color: blue; margin-left: 30px;">Step Title</h3>


Additionally you could add a class within a tag, and then style it with code in the player or external style sheet. (Keep an eye out for a future post on the principles of a Flow player).



A Step element is relatively straightforward but can be creatively applied in a lot of ways. We have a long-time Flow customer in the Healthcare & Life Sciences space that uses over a thousand call-center agent conversation-scripting flows, composed almost entirely of Steps and Outcomes. They also leverage a nifty history setting that can be turned on in the flow player. This setting shows the agents a UI pane that tracks the full path of selections (as well as the options that weren't selected) throughout the conversation.


Chris Cappetta is a Workflow Solutions Architect for the Boomi Flow team. He likes nachos.

In our next Community Champion spotlight, we talk with Leif Jacobsen.  Contributors like Leif make the Boomi Community a vital resource for our customers to get the most from our low-code, cloud-native platform.


As one person doing all the integration for one of Denmark’s largest companies, how do you manage all your integration projects?


Leif:  Well, when you have Boomi and its potential to speed development, you can develop new integrations very quickly. Then you don’t need anyone else.


With SAP, every time someone asked for an integration, the assumption was that it would take two or three weeks, or a month. But I can do it in two or three days with Boomi. Sometimes, in two or three hours. People are constantly impressed.


Read the full interview here:  An Integration Team of One: Q&A with Community Champion, Leif Jacobsen.


Look for more interviews with Community Champions coming soon!

If you’ve ever had to support a production application, you know how important logs can be.  Like most applications, the Dell Boomi Atom generates log messages that are written to files on disk. When problems arise, these logs help you figure out what is going on.  Sometimes troubleshooting is as easy as opening the log file and reviewing the most recent messages.  However, sometimes it’s not. Sometimes you need to correlate the Dell Boomi Atom logs with logs from other applications (possibly on different servers) or observe patterns in logs over time.  Attempting to do this manually can be a daunting task. This is where a centralized logging solution can really help. By centralizing all logging, including the Dell Boomi Atom log, into a single application, you gain the ability to easily search across multiple logs quickly.


I recently spent some time experimenting with a popular open source logging platform called Elastic Stack.  In this blog post we will step through what I did to install the Elastic Stack and how I configured it to ingest container logs.  By the end of this blog, we will have the following setup:

Dell Boomi Atom + Elastic Stack


In order to follow along with this blog, there are a few things you will need to set up first:

  • Docker: To simplify my setup, I chose to install all the Elastic Stack components with Docker. If you don’t have Docker installed already, you can find installation instructions on the Docker website.
  • Docker Compose: Docker Compose allows you to coordinate the install of multiple Docker containers. For my setup, I wanted to install Elasticsearch, Kibana, and Logstash all on the same server; Docker Compose made this easy. You can install Docker Compose by following the steps in the Docker documentation.
  • Atom: I will be demonstrating sending container logs from a single Atom that is set up to use UTC and the en_US locale (the importance of this will be explained later). I recommend using a fresh Atom so that you don’t impact anything else. Instructions for installing a local Atom can be found under the Atom setup topic in our documentation. While I am only demoing an Atom, the ideas I cover in this blog can be applied to Molecules and Clouds as well.
  • Configuration Files: All the configuration files referenced in this blog (filebeat.yml, docker-compose.yml, etc.) are available on Bitbucket.


What is the Elastic Stack?

Elastic Stack (formerly known as ELK) "is the most popular open source logging platform."  It is made up of four main components that work together to ship, transform, index and visualize logs.  The four components are: 




  • Beats: A lightweight agent that ships data from the server where it is running to Logstash or Elasticsearch.
  • Logstash: A data processing pipeline that ingests data from Beats (and other sources), transforms it and sends it along to Elasticsearch.
  • Elasticsearch: A distributed, RESTful search and analytics engine that centrally stores all your data.
  • Kibana: A data visualization tool that allows you to slice and dice your Elasticsearch data (i.e. your log data).


All four components were designed with scale in mind. This means you can start out small, like we will in this blog, and scale them out later to meet the demands of your architecture.


What is the Container Log?

Now that we’ve gone over what the Elastic Stack is, let’s take a look at the log that we are going to process. If you aren’t already familiar with the container log, I encourage you to read Understanding the Container Log.  For our purpose, the most important thing to understand is the format of the log. This information will be needed when we configure the Filebeat prospector and the Logstash pipeline in the next sections. As the “Log Structure and Columns” section explains, each log message is composed of five fields: TIMESTAMP, LOG LEVEL, CLASS, METHOD and MESSAGE.


The best way to understand the structure is to look at some example log messages. Here are two from my container log: 

May 31, 2018 2:15:22 AM UTC INFO [com.boomi.container.core.AccountManager updateStatus] Account manager status is now STARTED


May 31, 2018 2:25:08 AM UTC SEVERE [com.boomi.process.ProcessExecution handleProcessFailure] Unexpected error executing process: java.lang.RuntimeException: There was an error parsing the properties of the Decision task. Please review the task ensuring the proper fields are filled out.
java.lang.RuntimeException: There was an error parsing the properties of the Decision task. Please review the task ensuring the proper fields are filled out.
        at com.boomi.process.util.PropertyExtractor.resolveProfileParams(
        at com.boomi.process.util.PropertyExtractor.initGetParams(
        at com.boomi.process.shape.DecisionShape.execute(
        at com.boomi.process.graph.ProcessShape.executeShape(
        at com.boomi.process.graph.ProcessGraph.executeShape(
        at com.boomi.process.graph.ProcessGraph.executeNextShapes(
        at com.boomi.process.graph.ProcessGraph.execute(
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$


The first log message is a simple single-line log message. The second log message is an example of a multi-line message that includes a stack trace in the MESSAGE field. Another thing that might not be obvious just by looking at the log is that the TIMESTAMP and LOG LEVEL fields are dependent on your time zone and locale. This means that if you have multiple Dell Boomi Atoms running in different locations, you might need to have different Logstash and Filebeat configurations (or at least more complicated grok patterns than I show later). As mentioned earlier, my Atom was configured to log using UTC and en_US.
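To see that locale and time-zone dependence concretely, here is a quick Python sketch (outside the Elastic Stack entirely) that parses the timestamp prefix from the first sample message. The month abbreviation and AM/PM marker are what make this locale-dependent:

```python
from datetime import datetime

# Timestamp prefix from the first sample log message above. The month
# abbreviation ("May") and the AM/PM marker are en_US conventions; an Atom
# running under another locale would write them differently, breaking this
# format string.
stamp = "May 31, 2018 2:15:22 AM"
parsed = datetime.strptime(stamp, "%b %d, %Y %I:%M:%S %p")
print(parsed.isoformat())  # 2018-05-31T02:15:22
```

The same reasoning applies to the `MMM dd, yyyy KK:mm:ss a` pattern used by the Logstash date filter later in this post.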


Setting up Elasticsearch, Kibana and Logstash via Docker

Before we can start sending logs via Filebeat, we need to have somewhere to send them to. This means installing Elasticsearch, Kibana and Logstash.  Since I was just exploring Elastic Stack, I decided to install all three products on the same physical server (but separate from the server where my Atom was running) using Docker Compose. Running them all on a single server wouldn't be a good idea in a production environment, but it allowed me to get up and running fast.

  1. Clone the elastic-stack-demo repository and checkout the 'part1' tag.

    $ git clone

    Cloning into 'elastic-stack-demo'...

    $ cd elastic-stack-demo

    $ git checkout part1

  2. Start up the Elastic Stack using docker-compose.

    $ docker-compose up

    Creating elasticsearch ... done
    Creating kibana ... done
    Creating logstash ... done
    Attaching to elasticsearch, kibana, logstash
    elasticsearch | [2018-07-03T04:15:14,294][INFO ][o.e.n.Node ] [] initializing ...

  3. That's it. Once it all starts up, point your browser at http://<your_server>:5601 and see that it brings up Kibana. 
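For reference, the docker-compose.yml behind step 2 generally takes the following shape. The image versions, ports, and settings below are illustrative only; use the actual file from the Bitbucket repository:

```yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.3.0
    container_name: elasticsearch
    environment:
      - discovery.type=single-node   # fine for a demo, not for production
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.3.0
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  logstash:
    image: docker.elastic.co/logstash/logstash:6.3.0
    container_name: logstash
    ports:
      - "5044:5044"                  # Beats input
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    depends_on:
      - elasticsearch
```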

At this point, you have a Logstash pipeline running that is ready to consume and process container log messages.  To understand the pipeline a bit more, let’s take a look at the pipeline configuration file.

input {
     beats {
          port => 5044
     }
}

filter {
  if [sourcetype] == "container" {
    grok {
        match => { "message" => "(?<log_timestamp>%{MONTH} %{MONTHDAY}, %{YEAR} %{TIME} (?:AM|PM)) %{WORD} (?<log_level>(FINEST|FINER|FINE|INFO|WARNING|SEVERE))%{SPACE}\[%{JAVACLASS:class} %{WORD:method}\] %{GREEDYDATA:log_message}" }
    }
    date {
        match => [ "log_timestamp", "MMM dd, yyyy KK:mm:ss a" ]
        timezone => "UTC"
        remove_field => [ "log_timestamp" ]
    }
  }
}

output {
     elasticsearch {
          hosts => [ "elasticsearch:9200" ]
     }
}
The input and output stages of the pipeline are pretty standard.  The pipeline is configured to receive log messages from the Elastic Beats framework and ultimately send them to Elasticsearch.  The interesting part of the pipeline is the filter stage. The filter stage is using the grok filter to parse container log messages into fields so that the information can be easily queried from Kibana.  It is also using the date filter to parse the timestamp from the log message and use it as the logstash timestamp for the event. This way the timestamp in Kibana will be the timestamp of the log message, not the timestamp of when the message was processed by the pipeline.


As a reminder, log message content is dependent on the Atom time zone and locale so the grok and date filters shown in this configuration might need to be tweaked for your time zone and locale.
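If you want to sanity-check the parsing logic against your own log lines before touching Logstash, a rough Python equivalent of the grok expression can help. This is a simplification: grok's %{MONTH}, %{TIME} and %{JAVACLASS} patterns are more precise than the character classes used here.

```python
import re

# Simplified stand-in for the grok pattern in the Logstash filter above
CONTAINER_LOG = re.compile(
    r"(?P<log_timestamp>[A-Za-z]{3} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} (?:AM|PM)) "
    r"(?P<tz>\w+) "
    r"(?P<log_level>FINEST|FINER|FINE|INFO|WARNING|SEVERE)\s+"
    r"\[(?P<class>[\w.$]+) (?P<method>\w+)\] "
    r"(?P<log_message>.*)",
    re.DOTALL,  # multi-line messages keep their stack trace in log_message
)

line = ("May 31, 2018 2:15:22 AM UTC INFO "
        "[com.boomi.container.core.AccountManager updateStatus] "
        "Account manager status is now STARTED")

m = CONTAINER_LOG.match(line)
print(m.group("log_level"))    # INFO
print(m.group("log_message"))  # Account manager status is now STARTED
```

Feeding a few real lines from your container log through a check like this is a fast way to spot locale differences before they show up as grok parse failures.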


Setting up Filebeat

Now that the Logstash pipeline is up and running, we can set up Filebeat to send log messages to it.  Filebeat should be installed on the same server as your Atom. There are multiple ways to install Filebeat. I chose to install it using Docker.  My install steps below reference a few variables that you will need to replace with your information:


YOUR_ES_HOST - The hostname or IP address where you just installed Elasticsearch.

YOUR_ATOM_HOME - The directory where your Atom is installed.

YOUR_CONTAINER_NAME - The name you gave your Atom.  This will be queryable in Kibana. 

YOUR_CONTAINER_ID - The unique ID of your Atom (aka Atom ID).

YOUR_LOGSTASH_HOST - The hostname or IP address where you just installed Logstash (in this example, it is the same as YOUR_ES_HOST).


Once you've collected that information, you can install and configure Filebeat on the Atom server by following these steps:

  1. Start your Atom if it isn't running already.

    <YOUR_ATOM_HOME>/bin/atom start

  2. Pull down the Filebeat Docker image.

    $ docker pull

  3. Manually load the Filebeat index template into Elasticsearch (as per the Filebeat documentation).

    $ docker run setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["<YOUR_ES_HOST>:9200"]'

  4. Clone the elastic-stack-demo repository and checkout the 'part1' tag.

    $ git clone

    Cloning into 'elastic-stack-demo'...

    $ cd elastic-stack-demo

    $ git checkout part1

  5. Update the ownership and permissions of the Filebeat configuration file (see Config file ownership and permissions for more information on why this is needed).
    $ chmod g-w filebeat.yml 
    $ sudo chown root filebeat.yml
  6. Start the Filebeat Docker container.

    $ docker run -d -v "$(pwd)"/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro -v <YOUR_ATOM_HOME>/logs:/var/log/boomi:ro -e CONTAINER_NAME='<YOUR_CONTAINER_NAME>' -e CONTAINER_ID='<YOUR_CONTAINER_ID>'  -e LOGSTASH_HOSTS='<YOUR_LOGSTASH_HOST>:5044' --name filebeat

Once Filebeat starts up, it will use the prospector defined in filebeat.yml to locate, process and ship container log messages to the Logstash pipeline we set up earlier.  Let's quickly review how the prospector is configured.

#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/boomi/*.container.log
  multiline.pattern: '^[A-Za-z]{3}\s[0-9]{1,2},\s[0-9]{4}'
  multiline.negate: true
  multiline.match: after
  fields:
    sourcetype: container
    atomname: '${CONTAINER_NAME}'
    containerid: '${CONTAINER_ID}'
  fields_under_root: true

As you can see, the prospector is configured to:

  • Read all container log files that are present in the /var/log/boomi directory (which is mounted in the Filebeat container that points to your logs directory)
  • Handle multi-line messages so that log messages with stack traces are parsed correctly.  Note, you may need to adjust the multi-line pattern shown here if your Atom is using a different locale.
  • Add additional informational fields (sourcetype, atomname and containerid) to the output that is sent to Logstash.  These fields will end up as queryable fields in Kibana.   


Query the Container Logs

The last step before you can explore your log messages is to tell Kibana which index(es) you want to search.  This is done by creating an index pattern in Kibana.

  1. Open up Kibana (http://<your_server>:5601).
  2. Click on Management.
  3. Click on Index Patterns.
  4. Enter ‘logstash-*’ as the Index pattern and click Next step.
  5. Select '@timestamp' as the Time Filter field name and click Create index pattern.
  6. Once created, you can explore the fields that are available in the new index.


It is finally time to test our setup end to end. Let's generate some log messages and search for them in Kibana.

  1. Stop your Atom. This will generate some log messages.

  2. On the Discover tab in Kibana, run a search for "Atom is stopping."
  3. Click on View single document to see all the fields that the Elastic Stack is tracking.


Isn't that much easier than searching the container log directly?  Searching for keywords is just the tip of the iceberg. I encourage you to explore the Kibana User Guide to see all the ways that Kibana can help you search, view and interact with the container log.


Are you using Elastic Stack (or another product) to centralize your Atom logs? If so, I'd love to hear about it.


Jeff Plater is a Software Architect at Dell Boomi.  He enjoys everything search, but not before his morning coffee.

I wanted to share a solution I recently developed for working with SWIFT messages in the Dell Boomi AtomSphere Platform. If you're not familiar with them, SWIFT messages are commonly used in the financial industry to exchange data between institutions. SWIFT messages follow a specific data format that does not align with any of the Profile types within AtomSphere, so I developed Swift2XML, a Java library that converts SWIFT messages to/from XML so you can work with the data in Maps and other shapes. The library is generic and supports all SWIFT Message Types.



Getting Started


Get and Upload Libraries to AtomSphere

Please download the following libraries:


Then upload them to your AtomSphere account via Setup > Development Resources > Account Libraries.


Create Custom Library Component

Once uploaded, you can create a Custom Library component and include the three libraries added above:



Once created, don't forget to deploy the Custom Library to the Molecule/Atom where you would like to test the framework.


Create the XML Profile

Create an XML Profile based on MT103.xml (included in the project), or on XML generated using Swift2XML (open the JUnit class and create a new test case).



Use of Swift2XML in a Process

In your Boomi process, add a Data Process shape with a Custom Scripting step. Replace the default script with the following:


To convert from SWIFT to XML:

import java.io.InputStream;
import java.util.Properties;
import com.boomi.proserv.swift.SwiftToXML;

for( int i = 0; i < dataContext.getDataCount(); i++ ) {
  InputStream is = dataContext.getStream(i);
  Properties props = dataContext.getProperties(i);

  String swift = SwiftToXML.inputStreamToString(is);
  String xml   = SwiftToXML.parseSwift(swift);
  is = SwiftToXML.stringToInputStream(xml);

  dataContext.storeStream(is, props);
}


And to convert from XML back to SWIFT:

import java.io.InputStream;
import java.util.Properties;
import com.boomi.proserv.swift.SwiftToXML;

for( int i = 0; i < dataContext.getDataCount(); i++ ) {
  InputStream is = dataContext.getStream(i);
  Properties props = dataContext.getProperties(i);

  String xml   = SwiftToXML.inputStreamToString(is);
  String swift = SwiftToXML.parseXML(xml);
  is = SwiftToXML.stringToInputStream(swift);

  dataContext.storeStream(is, props);
}



Example Process

The following process reads different SWIFT message types from disk, selects the messages with type 103, and removes the 71A tag.



Decision using the message type


Removing a tag


Execution view


Execution view: Input SWIFT Message (type 103)


Please note the presence of the last element in block 4:


Execution view: XML Message generated by Swift2XML


Execution view: SWIFT Message generated by Swift2XML after removing the tag


For additional information, please visit my Github project: swift2xml. If you need to handle SWIFT, give it a try and let me know what you think. Thanks!


Anthony Rabiaza is a Lead Solutions Architect with Dell Boomi.

In this post I wanted to share my experience and a "How To" for building a Boomi process and triggering it whenever a user enters or exits a geographical area. This was the first Boomi use case I thought of when joining the team. From a Boomi point of view it isn't complex at all. Nevertheless, I've shared fairly detailed step-by-step instructions.




Not so long ago I decided that I wanted more control over my heating bills, so I invested in a wireless thermostat. It was great, looked cool on the wall and had some interesting features. I was excited when a new feature was introduced which allowed me to control the thermostat's state purely based upon my mobile phone's location. I tried it and I 'think' it worked. I wasn't totally sure what its criteria were for adjusting the heating, turning it up or down and on and off. What are the thresholds for this? To get full control and learn about geofencing, I decided to see what's involved in building a similar system. Why not build it with Boomi!



There are a few elements required for this process:


  • A location platform – I've chosen one for this example that offers a free plan to get started
  • NEST thermostat developer account – you can register here
  • Mobile phone, either Android or iOS
  • Boomi Connectors required
    • Boomi Web Services Server connector – available with Professional Plus, Enterprise, and Enterprise Plus Editions
    • HTTP Client Connector


Creating your Callback URI

We'll need to build our Callback URI, which will allow us to authenticate our HTTP Client connector with our NEST OAuth Client. This is unique to each user account in AtomSphere. Log in to AtomSphere, navigate to your username at the top right, and select "Setup" from the drop-down menu. Take note of your "Account ID" from the "Account Information" section.


Append this to the callback URL format as follows: <ACCOUNT_ID>/oauth2/callback



So in this case it would be:


Building a NEST

We'll need two things to configure the NEST side of the equation. An OAuth Client to authenticate our Boomi process with and a NEST thermostat. You can either use a live NEST that you own or the simulator that they provide.


Creating an OAuth Client

Now go to the NEST developer site and register/sign in to gain access. Once in, we'll create a new OAuth Client, which effectively allows us to authenticate our Boomi process with the NEST environment and grants permission to change the temperature. Then we'll download the NEST Simulator application and use that in place of a real NEST. It uses exactly the same API, but allows you to test that all works well, and to follow this example even if you don't actually have a NEST.


Once logged in hit "Create New OAuth Client" and fill in all the mandatory fields. Paste in the Boomi Callback URI we created earlier into the Default OAuth Redirect URI field. Then go to permissions and select Thermostat. I chose to get read and write permissions as I'll need read permissions later when I build the NEST JSON profile. You'll need to type in a description into the permissions dialog which will be displayed to users upon authenticating.


Setup Nest OAuth Client


Once created take note of the OAuth section in which you'll find the:


  • Client ID
  • Client Secret
  • Authorization URL


We'll use these when we're configuring our HTTP Client connector.


Setting up the NEST Home Simulator Application

In reality you'd want to set this up to work with a real thermostat. If you have one, that's fine. If not, then the NEST Home Simulator will suffice. It's a Chrome plugin available here. Once it's downloaded, log in with the same credentials as those for your NEST developer account. Then create a new thermostat and you're all set.

If you have a NEST thermostat then by all means use that instead of the simulator. Keep in mind that the NEST API has certain rules around the state your thermostat is in when you are away from home, i.e. ECO mode. I encourage you to have a read of the API documentation here.



Building it out in Boomi

Now that we have a virtual NEST thermostat set up and the OAuth Client created, we can create a process in Boomi to interact with it. The Boomi process will only have three shapes: a Web Services Server connector at the start to listen for incoming data, a Decision shape to route the process, and an HTTP Client connector for changing the temperature on the NEST. First of all, however, we'll create a test process to perform a GET request so we can build out a JSON profile.


Configuring the HTTP Client connector

Boomi doesn't have a predefined Application Connector for NEST at present so we will build one out of the HTTP Client Connector.


Start a new process and drag an HTTP Client connector onto the canvas. Create a new connection and insert the Client ID, Client Secret and Authorization URL that we recorded whilst creating our NEST OAuth Client previously. Input the Authorization URL into the Authorization Token URL field, and the Client Secret and ID into the corresponding fields. Use the table below if you need to.


HTTP Client Connector Field

Authentication Type: OAuth 2.0
Client ID: <Client ID from the NEST OAuth Client App>
Client Secret: <Client Secret from the NEST OAuth Client App>
Authorization Token URL: <Authorization URL from the NEST OAuth Client App>
Access Token URL:


We don't need to set any fields in the SSL Options or Advanced tabs. Hit "Generate" and we will be taken to an authorization screen for our NEST OAuth Client app, where we allow our HTTP Client connector to generate a token for future use. You should see a screen similar to the below. Hit Allow and we'll be redirected to a Boomi screen stating "Authorization Code Received". Close this window and you'll see on your AtomSphere screen that the "Access Token generation successful" message is displayed.


Now create a new Operation. No configuration is required here other than ticking the "Follow Redirects" check box. Once saved we can now make calls to our NEST OAuth Client app.


Blog NEST ConnectorOAuth 2.0 Authentication


Importing the NEST JSON Profile

Let's test this simple process and get back an outline of the content that the NEST API provides. Kick off a Test Mode run and select either a local Atom or the Atom Cloud. You should receive JSON content back relating to your NEST account. Viewing the Connection Data will display the JSON response, something similar to the below. I've removed some of the structure; you'll receive more data than this in reality.


Note in the layout of this JSON that the id of my NEST Thermostat is "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH".

  "devices": {
    "thermostats": {
      "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH": {
        "humidity": 50,
        "locale": "en-GB",
        "temperature_scale": "C",
        "is_using_emergency_heat": false,
        "has_fan": true,
        "software_version": "5.6.1",
        "has_leaf": true,
        "where_id": "FydB4fkOWmyhzdeEDpGYOg7udugr_YTnOf3fhm3sJoGHw-q_8AY96A",
        "device_id": "3hL-EH_gBl3APR7XB5JR9-qoPde-hzuH",
        "name": "Guest House (2D5E)",
        "can_heat": true,
        "can_cool": true,
        "target_temperature_c": 13.5,
        "target_temperature_f": 56,
        "target_temperature_high_c": 26,
        "target_temperature_high_f": 79,
        "target_temperature_low_c": 19,
        "target_temperature_low_f": 66,
        "ambient_temperature_c": 5.5,
        "ambient_temperature_f": 42,
        "away_temperature_high_c": 24,
        "away_temperature_high_f": 76,
        "away_temperature_low_c": 12.5,
        "away_temperature_low_f": 55,
        "eco_temperature_high_c": 24,
        "eco_temperature_high_f": 76,
        "eco_temperature_low_c": 12.5,
        "eco_temperature_low_f": 55,
        "is_locked": false,
        "locked_temp_min_c": 20,
        "locked_temp_min_f": 68,
        "locked_temp_max_c": 22,
        "locked_temp_max_f": 72,
        "sunlight_correction_active": false,
        "sunlight_correction_enabled": true,
        "structure_id": "WoPouIo-IsQTuGgTd1HfbVtJ1y7XErHcj2hebmTEMyLPtovAH0PTpQ",
        "fan_timer_active": false,
        "fan_timer_timeout": "1970-01-01T00:00:00.000Z",
        "fan_timer_duration": 15,
        "previous_hvac_mode": "",
        "hvac_mode": "heat",
        "time_to_target": "~0",
        "time_to_target_training": "ready",
        "where_name": "Guest House",
        "label": "2D5E",
        "name_long": "Guest House Thermostat (2D5E)",
        "is_online": true,
        "hvac_state": "off"
  "structures": {
  "metadata": {

The above is just an example. Copy the actual output from your test mode run and save that to a text file.


Now we can create a new profile in AtomSphere and select the JSON Profile Format. Select Import, browse to the file we just created, and upload it. This will build out our JSON profile, which will be specific to our virtual NEST. We're now ready to interact with the NEST API.




Importing the Webhook JSON Profile

We'll need to create the profile of the data that will be received. We could configure things and do some test runs to receive this data; however, to speed things up, I've pasted their test-run output below. Follow a similar process as above and build out a JSON profile for the Webhook.

  "event": {
    "_id": "56db1f4613012711002229f6",
    "createdAt": "2018-04-07T12:49:21.890Z",
    "live": false,
    "type": "user.entered_geofence",
    "user": {
      "_id": "56db1f4613012711002229f4",
      "userId": "1",
      "description": "Jerry Seinfeld",
      "metadata": {
        "session": "123"
    "geofence": {
      "_id": "56db1f4613012711002229f5",
      "tag": "venue",
      "externalId": "2",
      "description": "Monk's Café",
      "metadata": {
        "category": "restaurant"
    "location": {
      "type": "Point",
      "coordinates": [
    "locationAccuracy": 5,
    "confidence": 3

This is also just an example but it's fine for the Webhook profile.



Building the main process

Let's start a new process and configure the start shape as a "Web Services Server" then create a new operation.  These settings are important to make sure the process is executed correctly. The "Operation Type" and "Object" fields will be appended to the URL of our Atom where this process will be deployed to. The "Expected Input Type" should be "Single Data". We aren't sending back any data because there won't be anyone listening to receive it so we can leave the "Response Output type" as None.


Geofence Operation


Now drag in a decision shape and choose the Webhook profile we created earlier for the First Value field. Next choose the element that we want to make a decision based upon. Drill down through the Element field to the Event object and select "Type".


This object will either contain "user.entered_geofence" or "user.exited_geofence". I configured the rest of the shape so that the comparison was "Equal To" with the second value configured as a static value of "user.entered_geofence". Therefore whenever a user enters the geofence the decision rule is set to true and proceeds down the True path. Otherwise it will proceed down the false path.
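In plain code, the routing the Decision shape performs amounts to the following sketch. The payload field names come from the webhook example above; the temperatures 25 and 10 mirror the static parameter values this walkthrough configures on the two HTTP Client connectors:

```python
import json

# Minimal sketch of the Decision shape's logic against the webhook payload
payload = json.loads('{"event": {"type": "user.entered_geofence"}}')

if payload["event"]["type"] == "user.entered_geofence":
    target_temperature_c = 25  # True path: user is arriving, warm the house
else:
    target_temperature_c = 10  # False path: user has left, turn it down

print(target_temperature_c)  # 25
```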


Now let's add in an HTTP Connector to the True path. Leave the action as a Get. Choose our previously configured Virtual NEST Connector for the connection and then create a new Operation. Configure it as in the below image using the NEST JSON profile we created in the Configuring the HTTP Connector step.



Make sure to choose PUT for the HTTP Method here and also check the Follow Redirects check box. This is because the NEST API ingests data using the PUT method only and upon making a request to the API it will issue a 307 redirect which the process will need to follow.


Now select parameters and add a new parameter. Selecting the Input will show the objects for our NEST JSON profile. Drill down through it to your NEST simulators device and select a target temperature object. I chose "target_temperature_c" object and gave it a static value of 25. Probably a little too hot in reality...


Make a copy of this Connector to the False path and change the temperature parameter to a lower value such as 10.


Nest Temperature parameter

If all has gone well you should have a process that looks something like this. Remember it's always best practice to put a stop at the end of each route.


Geofence NEST Control Process


Deploying to a local Atom

We would typically want authentication configured on our Atom Shared Web Server; however, that won't be possible for this example. It isn't possible to configure the webhook to pass our Atom credentials along with its data. As a result, we need to configure it as a basic API Type with the Authentication Type set to "None". This also means that this process must live on a locally deployed Atom, because the Atom Cloud requires user authentication when using the Shared Web Server.

*IMPORTANT* This isn't secure so keep this in mind. We could use the Security Token generated by the Webhook to make this more robust however I haven't implemented it here. Perhaps someone could offer up an example of this in the comments?


Navigate to the Manage tab in AtomSphere and select Atom Management. Choose the Atom you are going to deploy this process to, then select Shared Web Server from the Settings and Configuration section. Configure your Atom similar to the below. The Base URL will need to relate to your specific Atom's web-addressable URL and listening port. Don't override unless necessary. If you need in-depth help around this, have a look in the community. There's a lot of help there.


Shared Web Server

Take note of your Base URL here which we'll append with our Simple URL path from our Web Services Server Operation later.


Finally navigate to the Deploy tab and Deploy the process to your Atom.


Setting up

Registering for an account is very straightforward, and the free edition provides enough functionality for us to achieve this integration.

Sign up and fill in the details. Confirm your email to validate your account and log in. Once logged in, we need to create a Geofence to monitor, and a Webhook to send data to when our user crosses the threshold of the geofence. We also need a mobile app to create a user and report their location.  An SDK is available for developing your own mobile app. For this example I'm going to use the test app they provide.


Creating a Geofence

Log in and search for your location. Addresses and place names are supported. Once you've found your desired location, it's just a case of setting the threshold of our geofence. We would likely want to activate the heating system when we are close, but not too close.

I've used a 1km radius which fits well with me walking around the city. Now hit Create and your Geofence is set to go. I've used our office as an example.


Create new Geofence



Setting up a Webhook

The Webhook is configured in the Integrations section of the dashboard. Here we will input the URL of our Web Services Server.

Choose the same environment in which your Geofence exists (Test or Live) and select "Single Event". In the URL we want to input our Web Services Server URL from the Boomi process we created earlier and append it with the Simple URL path from the Web Services Service operation.


In my example the completed URL would be as below: the Base URL from your Shared Web Server settings, appended with the Simple URL path from your Web Services Server operation.

Integration

At this stage we can hit Test, which will send a sample JSON message to our process. You should see the temperature on your NEST simulator app change. If it doesn't, then there's a problem; it's a good idea to troubleshoot at this stage to make sure that the Atom and process are configured correctly. We can also use Postman to test, as I have in the video below.



Installing and configuring the test mobile app

As mentioned at the beginning, there is an SDK available for developing your own app; however, the test app is really simple and will work just fine.

It is available for either iOS or Android. Once installed, we just need to enter the relevant publishable key.

These keys are available at 


My Geofence and Webhook reside in the Test environment, so I used the Test publishable key. It's worthwhile copying the key into an email and sending it to yourself, as it's lengthy. Copy it into the app on your mobile device and enable Tracking. If you look in the dashboard, you'll now appear with a user ID and device ID. You're being tracked! Now whenever this device crosses the threshold of your Geofence, it will fire an event at our Boomi process, which will then interact with our NEST thermostat and change the temperature.


Final thoughts

This is a basic use case using some fairly powerful Boomi shapes and connectors. There is much more we could do here to make this more interesting such as adding additional actions and connecting to multiple NEST devices at the same time. We could trigger lighting, music, all sorts of IoT devices that exist in the home, office or workplace.


In a more commercial use case we could use a similar method for monitoring deliveries, trucking, ships, you name it. Boomi processes can be triggered like this in many different ways to provide services and solutions that can streamline all kinds of scenarios out there.


Looking at this simple example really opened up my mind to the huge number of use cases there are when it comes to IoT and geofences, and to the variety of ways in which we could utilize the data in these devices and take action based upon their location.


I hope you've found this of interest and would love to hear your thoughts, suggestions and the use cases you have found which could be solved using Geofences and IoT devices and of course Boomi.

In our next Community Champion spotlight, we talk with Harikrishna Bonala.  Contributors like Hari make the Boomi Community a vital resource for our customers to get the most from our low-code, cloud-native platform.


How does Boomi address those common customer concerns?


Hari:  From a management perspective, Boomi has a very good user interface — very clean and neat — and we can easily show customers how simple it is to access their integrations. And, of course, Boomi is a true native-cloud, single-instance/multi-tenant platform. Plus, Boomi offers hundreds of application connectors to make integration plug-and-play.



Read the full interview here:  Learning Through Experience: Q&A With Boomi Community Champion Hari Bonala.


Look for more interviews with Community Champions coming soon!


Avro objects

Posted by teemu_hyle Employee Jun 4, 2018

This article describes how to create Apache Avro™ objects from JSON files.
This example uses Apache Avro™ version 1.8.2.


The use case behind this is sending Avro objects to messaging systems like Kafka.
This example creates Avro objects from JSON files and sends them via a connector.
The same logic could be implemented in custom connectors with the Dell Boomi Connector SDK.


Download Avro tools

Download avro-tools-1.8.2.jar from the Apache Avro™ download page.


Create Custom Library

Upload avro-tools-1.8.2.jar via the Account Libraries tab on the account menu's Setup page.
Then create a Custom Library containing avro-tools-1.8.2.jar and deploy it to your Atom.

Define Avro schema

The following example schema is used.
A JSON profile can be created in Boomi for this schema, and the Avro schema could be validated at runtime, but that is not necessary.


{
    "namespace": "example.avro",
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "favorite_number", "type": ["int", "null"]},
        {"name": "favorite_color", "type": ["string", "null"]}
    ]
}

Input JSON files

Two JSON files, placed on an SFTP server, are used as input for this example.


{"name": "Alyssa", "favorite_number": {"int": 256}, "favorite_color": null}


{"name": "Ben", "favorite_number": {"int": 7}, "favorite_color": {"string": "red"}}
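Note the wrapper objects around the union values: in Avro's JSON encoding, a non-null value of a union type like ["int", "null"] is labeled with its branch name, which is why 256 appears as {"int": 256} while a null stays bare. A minimal plain-Java sketch of that wrapping rule (the class and helper names are illustrative, not part of the Avro API):

```java
public class UnionJsonDemo {
    // Avro JSON encoding for a ["int", "null"]-style union:
    // null stays bare; any other value is wrapped in an object
    // keyed by its branch name, e.g. 256 -> {"int": 256}.
    public static String unionJson(String branch, String literal) {
        return (literal == null) ? "null" : "{\"" + branch + "\": " + literal + "}";
    }

    public static void main(String[] args) {
        // Reproduces the shape of the "Ben" record above
        System.out.println("{\"name\": \"Ben\", \"favorite_number\": "
                + unionJson("int", "7")
                + ", \"favorite_color\": " + unionJson("string", "\"red\"") + "}");
    }
}
```

Feeding the decoder an unwrapped value (e.g. "favorite_number": 7) would fail against this schema, since Avro's JSON decoder expects the branch label.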

Create Process

The following process reads the input JSON files and the Avro schema. The Avro schema is stored in a Dynamic Process Property.
A Data Process step with Custom Scripting is used to create the Avro objects from JSON.
One Avro object is created from the multiple JSON files.
In this example the Avro object is sent to an SFTP server, but it could be sent to a messaging system like Kafka.

Data Process / Custom Scripting (Avro)

import java.util.Properties;
import com.boomi.execution.ExecutionUtil;
import org.apache.avro.Schema;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.file.DataFileWriter;

// Avro schema from the Dynamic Process Property
String schemaStr = ExecutionUtil.getDynamicProcessProperty("AVRO_SCHEMA");
Schema.Parser schemaParser = new Schema.Parser();
Schema schema = schemaParser.parse(schemaStr);

// Avro writer backed by an in-memory buffer
DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<GenericRecord>(schema);
DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<GenericRecord>(datumWriter);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
dataFileWriter.create(schema, baos);

for (int i = 0; i < dataContext.getDataCount(); i++) {
    InputStream is = dataContext.getStream(i);
    Properties props = dataContext.getProperties(i);

    // Read the input JSON document into a string
    Scanner s = new Scanner(is).useDelimiter("\\A");
    String jsonString = s.hasNext() ? s.next() : "";

    // Decode the JSON into an Avro record and append it to the Avro file
    DecoderFactory decoderFactory = new DecoderFactory();
    Decoder decoder = decoderFactory.jsonDecoder(schema, jsonString);
    DatumReader<GenericData.Record> reader = new GenericDatumReader<GenericData.Record>(schema);
    GenericRecord genericRecord = reader.read(null, decoder);
    dataFileWriter.append(genericRecord);
}
dataFileWriter.close();

// Output document (Avro)
dataContext.storeStream(new ByteArrayInputStream(baos.toByteArray()), new Properties());

Test run and validation

The process was test run, and the resulting Avro object was validated with a small "Deserializer" program.


Below is the code for the "Deserializer" used to validate the Avro object.

import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.io.DatumReader;

class Deserialize {
    public static void main(String[] args) {
        try {
            Schema schema = new Schema.Parser().parse(new File(args[0]));
            File file = new File(args[1]);

            System.out.println("Avro Deserializer");
            DatumReader<GenericRecord> datumReader = new GenericDatumReader<GenericRecord>(schema);
            DataFileReader<GenericRecord> dataFileReader = new DataFileReader<GenericRecord>(file, datumReader);
            GenericRecord record = null;
            while (dataFileReader.hasNext()) {
                record = dataFileReader.next(record);
                System.out.println(record);
            }
            dataFileReader.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Today, Dell Boomi is announcing the Spring 2018 release of the Boomi platform. This release showcases more than 100 new and enhanced capabilities made generally available to Boomi customers in the first half of this year, including more than 20 features that were requested by Boomi customers (Thank you!).


Spring 2018 Release: Overview

This release includes new and enhanced features for every element of the Boomi platform. Collectively, these features help further simplify Boomi’s user experience, drive developer productivity, and facilitate building low-code workflow applications. It also showcases Boomi’s support for IoT integration.


Boomi for IoT

With this release, Boomi unveils the general availability of its IoT solution. This solution has been available to Boomi customers for the last year, and provides the ability to automate business processes and deliver an engaging experience via web, mobile, or any other interfacing technology to achieve tangible business outcomes using device data. 


Features include:

  • Edge device support - IoT integration patterns require seamless integration of device data, application data and people. With the patented Boomi Atom, customers can perform integrations, and host and manage web services, anywhere: on-premises, in clouds, or on edge devices.
  • IoT Protocol support - Boomi provides connectivity to a vast array of cloud and on-premise applications, as well as devices and device data through support for a variety of IoT connectivity protocols, including open standards such as AMQP, MQTT and REST.
  • Workflow Automation for IoT - Organizations can use Boomi Flow to create business processes that respond to certain alerts or triggers as appropriate and allow for human intervention or decisions.
  • Boomi Environment Management - IoT deployments may encompass multiple edge gateways performing common application integration functions to support a multitude of devices across a large landscape. With Boomi Environment Management, users can centrally manage application integration processes and automatically synchronize changes across all Boomi Atoms and associated gateways that belong to an environment.


 Integration Accelerators

Since its inception, Boomi has focused on providing ease of use and tools for accelerating building and managing integration projects. This release includes UI enhancements, simplified reporting to empower data governance specialists, new and improved connectors and connector SDK, as well as the ability for Boomi users to showcase their work and contribute to Process Libraries.

To learn more, please watch the video below:


Developer Productivity

To drive greater developer productivity, Boomi has introduced new features to automate and streamline support for enterprise-scale use cases, thereby reducing the complexity associated with IT operations. Highlights include Packaged Deployment to simplify management of user-developed integration artifacts, new authentication capabilities for API management, support for importing a Swagger definition or WSDL from a publicly facing URL, as well as enhanced support for SharePoint in Boomi Flow.

To learn more, please watch the video below:



To learn more about the Partner Developed connectors, please visit the Boomi Blog.

You can also read the Press Release here. 

A year and a half ago, by popular request, we launched our Process Library built into the Boomi platform to provide examples, templates and ‘how-tos’ to help you create your integration solutions faster. This was in response to your ask “just show me an example!”


The Process Library proved valuable to you all, and since then we have been thinking about how to help our ecosystem of customers, OEM, channel partners and system integrators showcase the templates and examples you have built.

Today we are excited to announce the ability for you -- members of the Boomi Community -- to publish and share your examples and templates across the Boomi network.


If you are a Boomi expert, an OEM partner, or a Systems Integrator partner looking to share your Boomi assets and create more visibility for the work you have done, you have come to the right place: Community Share.


The Community Share allows you to:

  • Share how-to examples and templates
  • Give the community visibility to your expertise
  • Provide access to complex processes partners may have built
  • Search across all assets, whether provided by Dell Boomi, a partner, or other Boomi experts


The Community Share Mission

These improvements are part of our broader vision to make Boomi how-tos and examples more complete, and have those available to share within the community.


1. Power of Boomi Ecosystem

Recognizing the amazing champions and expertise within the larger Boomi ecosystem, we want to provide a place for you to showcase and share your diverse expertise within the Boomi Community.


2. Catered to You

When you add to what Boomi provides, you deliver that insight directly to your customers and stakeholders. You can now feature your work directly inside the Boomi platform. You can also leverage additional examples contributed by the community and accelerate time to value for implementing your ideas.


3. Simplicity

In a few clicks, you can easily provide access to your work, or find answers, templates and samples from experts. We are one community, and Community Share provides a common platform for sharing integration assets.


We believe unlocking the community fuels more innovation and drives ideas on what is possible with the power of you!


Visit Community Share to see what’s available and instructions for how to share your examples.

Thameem Khan

Boomi Enabled Slack Bot

Posted by Thameem Khan Employee May 9, 2018

It's been some time since my last blog. I have been slacking all the while, and I wanted to share my experience of building a Slack bot. As enterprises look at more and more automation, BOTs will play a critical role. BOT platforms provide sophisticated speech recognition and natural language understanding, which enables an efficient UI/UX. But these BOTs need to interact with other applications to serve data to the end user. This is where iPaaS (Boomi) plays a key role: iPaaS enables BOTs to interact with the applications where the data actually resides.


The video below walks through an example BOT's functionality and architecture, and shows how Boomi adds value. I hope you find it interesting and come up with more interesting BOTs of your own. Please feel free to reach out to me and, as always, comments and suggestions are welcome.




Amazon Lex – Build Conversation Bots 

AWS Lambda – Serverless Compute - Amazon Web Services 

Integrating an Amazon Lex Bot with Slack - Amazon Lex 

Join AWS for Live Streaming on Twitch 


Thameem Khan is a principal solutions architect at Dell Boomi and has been instrumental in providing thought leadership and competitive positioning for Dell Boomi. He is good at two things: manipulating data and connecting clouds.

I was recently given a challenge to integrate Banner (LINK) data and transform it to an EDI file, specifically a 4010 - 130 Student Educational Record / Transcript (LINK).  Upon researching the Banner structure provided, I figured it would make a good community post on handling complex flat files that contain positional records and have cross-record relationships with looping contained therein.


There are many different ways to handle this so I'm providing what I found to be most logical.  I look forward to your feedback on approaches you may have taken in the past to accomplish similar scenarios and I'll try my best to reply to comments or questions as they are posted.


Let's get started:


Many Higher Education organizations use Ellucian Banner for their Student Information System.
Though the sample implementation is Ellucian Banner specific, the same concepts for the Flat File profile may be applied to any positional file which needs to maintain relationships between records - think 'hierarchy'.  Also, though the end target system is EDI with specific segments, like everything in Boomi, the target destination can easily be swapped out for other systems (JSON, XML, DB etc.).



Reference Guide Articles

Here are some links to our Reference Guide, which you may find useful when using and configuring this scenario.

  • Multiple Record Formats - Profile (LINK)
  • Relationships between flat file record formats (LINK)
  • FF Profile Elements (LINK)
  • EDI Profile Identifier Instances (LINK)


Scenarios on How to Use Flat File to EDI

Scenario 1 - How do I configure a Flat File Profile which is positional AND contains hierarchies?

Below is a screenshot of the attached sample document, which was provided by Banner.

Items of note (from top to bottom):

  • BLUE box: a repeating element with the first two characters "RP" as the identifier when OUTSIDE of the S1 hierarchy
  • RED box: Starts with the S1 identifier and continues until the next S1 identifier
    • YELLOW highlight box: contains information related to S1 parent.  This is a mix of individual lines (S2 & SUN) as well as sub-hierarchies (C1, C2 & RP)
    • LIGHT BLUE box: individual courses taken with additional information contained in the S2 line and optional RP line
  • GREEN box:  repeating element with T1 -> SB -> TS relationship
  • ORANGE box: Identification items for the student's transcript (name, address, DOB etc.)
  • When breaking down the individual lines, the file is POSITIONAL with specific start / end character locations which define the specific record components
  • Understanding the FF relationship between records is key to correctly defining the FF record profile

Using the same color coding on the screen cap, the image below shows the FF Profile hierarchy configuration.

Overall profile options were set up with "File Type" = "Data Positioned" (others were left as default)

   Options configuration for the FF profile

Data Elements were created, and configuration for "Detect This File Format By" was set to "Specified unique values"


Positional Start Column, Length & Identity Format configuration was set for each level and element:

For each of the fields identified, configuration was set up accordingly.
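Under the hood, the Start Column / Length settings amount to fixed-offset substring extraction. Here is a plain-Java sketch of that idea (the record layout and column positions below are made up for illustration; use the positions from your Banner documentation):

```java
public class PositionalParseDemo {
    // Extracts a field by 1-based start column and length, trimming
    // padding - mirroring what the FF profile's Positional Start
    // Column and Length settings do for each element.
    public static String field(String line, int startCol, int length) {
        int from = startCol - 1;
        if (from >= line.length()) return "";
        int to = Math.min(from + length, line.length());
        return line.substring(from, to).trim();
    }

    public static void main(String[] args) {
        // Hypothetical "S1" record: identifier in columns 1-2,
        // a term code in columns 3-8, a description afterwards.
        String line = "S1200801FALL SEMESTER";
        System.out.println(field(line, 1, 2));  // the record identifier
        System.out.println(field(line, 3, 6));  // the term code
    }
}
```

The record identifier extracted this way is what "Detect This File Format By" = "Specified unique values" matches against to pick the correct record format at each level of the hierarchy.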

Scenario 2 - How do I work with packed number formatting?


One of the fields within the source flat file contained 6 digits ("006000"), but in reality it represents a decimal value with the last three characters being the implied decimal location.

The profile options allow you to specify the Data Format, where you can declare the field a Number type with an "Implied Decimal" value. This auto-converts the field to 006.000 for the output map without any additional math function (my first attempt was to divide by 1000).

Implied Decimal format for packed value
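The conversion the "Implied Decimal" option performs can be expressed with BigDecimal; this is a sketch of the arithmetic, not Boomi's internal implementation:

```java
import java.math.BigDecimal;

public class ImpliedDecimalDemo {
    // "006000" with 3 implied decimal places -> 6.000:
    // shift the decimal point left by the implied number of places.
    public static BigDecimal impliedDecimal(String raw, int places) {
        return new BigDecimal(raw).movePointLeft(places);
    }

    public static void main(String[] args) {
        System.out.println(impliedDecimal("006000", 3)); // prints 6.000
    }
}
```

Note that movePointLeft increases the scale, so the trailing zeros survive ("6.000" rather than "6"), matching the 006.000 output described above.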


Scenario 3 - Mapping the FF profile to EDI (4010-130 - Transcripts)

Though Boomi Suggest will map many of the fields based on past customer maps, there may be a few items in the EDI profile that need to be configured based on your needs.

For me, I needed to create two identifier loops for N1 (N101=AS and N101=AT) to accomplish the desired output and simplify my mapping.  Visit this LINK if more information about the identifier instances is required.

EDI Identifier Instances

Some of the target EDI loops also needed the "Looping Option" set to "Occurrence", as opposed to the default "Unique" selection, in order for the output results to nest as expected.


Common Errors

Error: Data is not being presented as expected

Sometimes the source profile needs to be adjusted based on how the actual data is flowing through and where the record identifiers are placed.  Make sure the data is represented in the profile the same way as the FF itself.  You can drag / drop elements and records between levels in the profile.  I would also suggest targeting an XML or JSON profile you manually create to test out the data (easier to see the results than in an EDI file).