Boomi Buzz


Dell Boomi provides numerous options for architecting your runtime platform. You can place your runtime engine in the Dell Boomi cloud, in a private cloud, on premises, or even in a hybrid environment. There are different runtime configurations, such as a single Atom, a Molecule, and a group of Atoms, to support various customer requirements. Among those runtime options, a Molecule is the most efficient way to support high volume with scalability. As companies move their applications and infrastructure to cloud providers such as Amazon, Azure, and Google Cloud, it often makes sense to run the Boomi runtime with one of those vendors. With Dell Boomi, it is very quick to install the Boomi runtime into any cloud infrastructure. This blog explains in detail the steps involved in installing a Boomi Molecule runtime on Google Cloud: provisioning VMs (nodes) and a shared file server on the Google Cloud Platform, followed by the installation of a Dell Boomi Molecule on the shared file server.

I shall outline the process of setting up a Molecule on the Google Cloud Platform using free Google Cloud offerings. The complete process may vary according to a customer's requirements based on factors such as CPU load, I/O, RAM, storage type, storage amount, and network security. A basic understanding of the Boomi platform, cloud concepts, and Linux commands is sufficient to complete this setup. The initial steps focus on setting up the Google Cloud environment, followed by installation of the Boomi Molecule runtime.

I look forward to your feedback and will do my best to efficiently respond to each comment or question.



Create a Shared File Server

  • Click on “Cloud Launcher” to look for a single-node file server VM. There are many file servers available on Google Cloud; “Single node file server” will be used as the shared file server for this setup. Refer to all file servers available on Google Cloud.

  • Search for “file share” and select “Single node file server”.

  • Click on “LAUNCH ON COMPUTE ENGINE” to deploy the file server.

  • Modify the below screen as per requirements. Don’t forget to keep a note of the region that has been selected. The same region should be used while creating VM instances. Click “Deploy” after all configuration is complete. Refer to single node file server for more information.

  • The next screen will highlight the basic information regarding the file server such as OS, associated software, and access details. Make a note of the mount command as it will be helpful in mounting the shared storage from the nodes.

  • The shared file server has been created.


Create a Node in Google Cloud

  • Open VM instances.

  • Click “Create” in VM instances

  • Complete the below form as per the requirements and click “Create”.

Note: The VM can be configured as per your requirements or guidelines from the customer, such as OS, zone (should be the same as the shared file server's zone), firewall, etc.

  • Once the instance has been created, it will appear as below:


Installing Molecule on Google Cloud

  • Open node1 via SSH using the feature provided in the Google Cloud console.

  • A new browser window will open showing an SSH connection to the VM. There are other ways to connect to Google Cloud, such as the gcloud command-line tool, PuTTY, etc.

  • Run the below commands to set up the VM instance:
    • Update existing packages:
      sudo yum update
    • Make the below directories to install the Molecule:
      sudo mkdir /mnt/data
      sudo mkdir /mnt/data/molecule
    • Mount the shared file server on node1:
      sudo mount -t nfs google-cloud-boomi-file-share-vm:/data /mnt/data
    • Create a new user and make it the owner of the shared file server location and the local working directory:
      sudo useradd -u 510 boomi
      sudo mkdir -p /usr/local/boomi/work
      sudo ln -s /mnt/data/molecule /usr/local/boomi/molecule
      sudo chown boomi:boomi /mnt/data/molecule
      sudo chown boomi:boomi /usr/local/boomi/work
    • Download and install the JDK (the exact command depends on the OS and JDK distribution chosen).
    • Download the Molecule installer and run it as the “boomi” user.

[boomi@google-cloud-boomi-node1 tmp]$ ./
Starting Installer ...
This will install a Molecule on your computer.
OK [o, Enter], Cancel [c]

Enter your user name and password or a token and supply a name for your Molecule.
Use the email address and password that you use to sign into your Boomi
AtomSphere account or select Token to use a Boomi-generated token.
User Name and Password [1, Enter], Token [2]

User Name

Molecule Name
The following entries are required if the installation machine requires a
proxy server to open the HTTP connections outside of your network.
Use Proxy Settings?
Yes [y], No [n, Enter]
Logging into Boomi.
Authenticating credentials
Choose an account.
Please select the account with which this Molecule will be associated. A
Molecule can only be associated with one account.
Boomi XXXX Account1 - [1]
Boomi_Sunny Bansal - [2]
Boomi XXX account 3 - [3]
Choose an installation directory.

Please select the location of the local and local temp directories. Both locations must be on the local disk and must have sufficient disk space for the Molecule's operation.
Local Directory
This setting sets the Working Data Local Storage Directory property
(com.boomi.container.localDir in the file). For more
information, see the User Guide.

Local Temp Directory

This setting overrides the default Java local temp directory (
in the atom.vmoptions file).
Create symlinks?
Yes [y, Enter], No [n]
Please read the following important information before continuing.

The following will be installed when you choose to continue:

Molecule - google-cloud-boomi-demo
Installation Directory - /mnt/data/molecule/Molecule_FS_Demo
Local Directory - /usr/local/boomi/work
Local Temp Directory - /tmp
Program Group - Boomi AtomSphere\Molecule - google-cloud-boomi-demo


Retrieving Build Number
Extracting files...

Downloading Atom Files
Downloading Molecule Files
Retrieving Container Id
Retrieving account keystore.
Retrieving account truststore.
Configuring Molecule.
The Molecule has been installed on your computer.
Molecule Name: google-cloud-boomi-demo
Preferred JVM: /usr/java/jdk1.8.0_161
Finishing installation...

  • Log in to the AtomSphere platform and the new Molecule will appear.

  • Go to the Molecule's properties and add the required properties in the Advanced tab. The Atom will be rebooted upon save.
    The format of Initial Hosts for Unicast is <node IP>[7800].

  • Installation of the first node is finished.
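For reference, the Unicast initial-hosts setting configured above maps to a container property along the following lines. The property name and the IP addresses here are illustrative assumptions, not taken from the original screenshots; verify the exact name in the Atom Management properties panel:

```
com.boomi.container.cloudlet.initialHosts=10.128.0.2[7800],10.128.0.3[7800]
```

Each entry is a node's private IP followed by the Unicast port in square brackets, comma-separated when there is more than one node.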


Adding Additional Nodes

  • Create a snapshot of existing node1.

  • Create a new instance from the snapshot

  • For the boot disk, instead of an OS image, select the “Snapshots” tab and choose the snapshot previously created.

  • All VM instances will show below.

  • Open node2 via SSH and run the below commands.
    • Mount the shared file server:
      sudo mount -t nfs google-cloud-boomi-file-share-vm:/data /mnt/data
    • Start the node2 atom as the “boomi” user:
      sudo su boomi
      cd /mnt/data/molecule/Molecule_FS_Demo/bin
      ./atom start


Key Points

  • The commands may vary according to the OS you choose while creating the VM instances and the shared file server.
  • The package managers, such as apt and yum, vary based upon the OS you choose; e.g., for RHEL the yum package manager is used, but for Debian apt will be used.
  • Always run the commands with sudo.
  • It is preferable to create additional nodes from a snapshot of the VM instead of creating a new VM from scratch. This reduces the possibility of human error.
  • Refer to the logs created at the /mnt/data/molecule/Molecule_FS_Demo/logs location; they will help in debugging issues related to the Molecule setup.
  • A modification to the exports file on the NFS server might be necessary to allow the NFS client to change the owner of the NFS location.

Note: command “sudo chown boomi:boomi /mnt/data/molecule” might fail if the below changes are not done in the NFS server file system.
File location: /etc/exports
Old content:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)

New content:
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
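The actual export entry is not shown above; what typically makes the client-side chown succeed is an export that includes the rw and no_root_squash options. A sketch of such an entry (the export path matches the mount command used earlier, but the client address range is an assumption for illustration):

```
/data 10.128.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing /etc/exports on the file server, re-export the file systems with `sudo exportfs -ra` (or restart the NFS service) for the change to take effect.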



The Open Data Protocol (OData) is a data access protocol built on core protocols like HTTP and commonly accepted methodologies like REST. The protocol enables the creation and consumption of REST APIs, which allow clients to publish and edit resources, identified using URLs and defined in a data model, through the mapping of CRUD (Create, Read, Update, Delete) operations to HTTP verbs.



Resource Operation

OData leverages the following HTTP verbs to indicate the operations on the resources.

  • GET: Get the resource (a collection of entities, a single entity, a structural property, a navigation property, a stream, etc.).
  • POST: Create a new resource.
  • PUT: Update an existing resource by replacing it with a complete instance.
  • PATCH: Update an existing resource by replacing part of its properties with a partial instance.
  • DELETE: Remove the resource.


Basic operations like Create (POST) and Read (GET) do not pose any challenges around concurrency: it is perfectly fine to have multiple concurrent transactions creating or reading resources or individual entities. Updating and deleting, however, are different.


Two transactions that try to update the same resource at the same time can run into problems like the 'lost update problem': the second transaction overwrites the update made by the first, so the first value is lost to other concurrently running transactions that need to read it. Those transactions will read the wrong value and end up with incorrect results.
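The lost-update problem can be sketched with a toy shell example in which a file plays the role of the shared resource (everything here is hypothetical, purely to illustrate the interleaving):

```shell
# A file stands in for the shared resource; two "transactions" interleave.
echo "balance=100" > /tmp/resource

a=$(cat /tmp/resource)   # transaction A reads balance=100
b=$(cat /tmp/resource)   # transaction B reads balance=100 as well

# A applies +50 to the value it read and writes the result back...
echo "balance=$(( ${a#balance=} + 50 ))" > /tmp/resource

# ...then B applies -10 to its own, now stale, read and writes that back.
echo "balance=$(( ${b#balance=} - 10 ))" > /tmp/resource

cat /tmp/resource   # balance=90 -- A's +50 update has been silently lost
```

B never saw A's write, so the final state reflects only B's change; this is exactly the race that conditional requests with ETags are designed to catch.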


OData V4 and Concurrency

OData V4 supports ETags for Data Modification Requests and Action Requests. The ETag, or Entity Tag, is one of several mechanisms that HTTP provides for, e.g., web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient and saves bandwidth, as a web server does not need to send a full response if the content has not changed. ETags can also be used for optimistic concurrency control, as in the case of OData, as a way to help prevent simultaneous updates of a resource from overwriting each other.


An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL. If the resource representation at that URL ever changes, a new and different ETag is assigned. Used in this manner ETags are similar to fingerprints, and they can be quickly compared to determine whether two representations of a resource are the same.


From the OData V4 Protocol Specifications (Ref. OData Version 4.0. Part 1: Protocol Plus Errata 03 ): Use of ETags for Avoiding Update Conflicts
If an ETag value is specified in an If-Match or If-None-Match header of a Data Modification Request or Action Request, the operation MUST only be invoked if the if-match or if-none-match condition is satisfied.

The ETag value specified in the if-match or if-none-match request header may be obtained from an ETag header of a response for an individual entity, or may be included for an individual entity in a format-specific manner.


So for every update, keep in mind that a stored ETag value may be out of date: before issuing an update request, always first use a GET request to obtain the current ETag of the entity.
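Sketched as raw HTTP, that sequence looks like the following (the entity key C00004 comes from the Navision example later in this article, while the field name and ETag value are illustrative):

```
GET /ODataV4/Contact('C00004') HTTP/1.1
Accept: application/json

HTTP/1.1 200 OK
ETag: W/"JzE2OzE1MTkwNzQ3MTg7Jw=="
...

PATCH /ODataV4/Contact('C00004') HTTP/1.1
If-Match: W/"JzE2OzE1MTkwNzQ3MTg7Jw=="
Content-Type: application/json

{ "Phone_No": "+31243882222" }

HTTP/1.1 204 No Content
```

If another client updated the entity in between, the supplied ETag no longer matches and the server answers 412 Precondition Failed instead of applying the change, which is precisely how the lost update is prevented.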


Boomi OData V4 Connector

In the latest release of the OData Client Connector, concurrency is now supported [V4 only] through the mechanism described above. Both headers, If-Match and If-None-Match, have been added as a configurable option, 'Concurrency Mode', in the OData Client Operation. The Concurrency Mode determines how any supplied ETags should be evaluated. Furthermore, ETag support has been added to the OData Client Connector as a Document Property.



Example Use Case - Microsoft Navision 2017

As part of a full Boomi platform POC that involved our Master Data Hub (or 'Hub') with sources Salesforce, SAP and Navision (2017) we implemented the typical Channel Updates synchronization process by taking the Process Library synchronization templates and specifying these for the various sources. For the Channel Updates (for the 'Contact' Domain) that need to be updated in Navision we end up with the following typical (POC) 'CUD' process:



When we zoom in on the Map shape doing the transformation from the Hub (MDM) XML profile - fetched as part of the channel updates - to the JSON profile that we need to construct as part of the OData Update Request we see the following:



Naturally we recognize all the typical field mappings for the Contact domain as defined by us in the Hub on the left-hand side to Navision on the right-hand side. More important in the light of this article we further zoom in on the 'editLink and ETag' function that will build the editLink and will get the ETag for the Entity at hand and set the ETag Document Property accordingly.



We use a Connector Call as part of the second branch in this function to retrieve the ETag for the Entity we want to update and set the ETag Document Property accordingly:



Now when a Data Steward makes a change in one of the 'Contact' Golden Records - perhaps to solve a Data Quality Error - a channel update for Navision will be pending (as in this case Navision is set up as a source that will accept channel updates):



Now, the Data Steward changes the Phone number for the 'Contact' Golden Record in the Master Data Hub from +31243880000 to +31243882222...



As said, this generates Channel Updates including one for Navision:



When we now execute the Fetch Channel Updates process for the Contact Domain in AtomSphere we can see the following...

First of all, we see the Channel Update going down the expected path or branch - the one that takes care of updates:



Second of all we can inspect the response document to see the update was actually applied!



Also, inspecting the Atom logs, we can see how indeed the ETag nicely got applied in the If-Match header as configured as part of our OData Client Connector Operation!


sendRequestHeader] >> PATCH /xxx_xx9999nl_a/ODataV4/Company('Acceptatie%20Xxxxxxxx%20Xxxxxx')/Contact('C00004') HTTP/1.1
sendRequestHeader] >> Accept: application/json;odata.metadata=full
sendRequestHeader] >> Content-Type: application/json;odata.metadata=none
sendRequestHeader] >> OData-MaxVersion: 4.0
sendRequestHeader] >> OData-Version: 4.0
sendRequestHeader] >> Content-Length: 206
sendRequestHeader] >> Host:
sendRequestHeader] >> Connection: Keep-Alive
sendRequestHeader] >> User-Agent: Apache-Olingo/4.3.0.boomi2
sendRequestHeader] >> Authorization: NTLM XXXXXXXXX==



Ref.: some of the basic statements around OData as a standard were taken from publicly available OData references.

Boomi MDM has a new name! At its core the application is a master data integration platform, and so with this release, the application is renamed Master Data Hub — Hub, for short.


Boomi defines synchronization as the action of causing a set of trusted records to remain identical in multiple sources. Many Boomi customers are solving master data challenges today by connecting systems and integrating data. But as Chief Product Officer Steve Wood recently noted in this blog post, "We know from listening to our customers that just connecting siloed cloud or on-premise data isn’t enough. You need to manage your data quality and synchronization among applications."


With Hub’s synchronization capability, we enable customers to standardize the nouns of their business (Customers, Products, Employees, Vendors, etc.) and ensure they are accurate, consistent and complete. And as systems or users with varying levels of trust update that information, we ensure these "golden records" comply with business standards before propagating those updates to many downstream systems or business processes. All of this is enabled through the Boomi platform's unprecedented ease of use and reusable features that support quick time to value.


It is important to note that the practice of Master Data Management (MDM) is still supported on the Boomi platform. The Boomi MDM connector supports the integration of data with the Hub and the MDM Atom Clouds support the setup and maintenance of master data repositories. We will continue to enable and enforce MDM and Data Governance programs. There is no change to the direction in which we are synchronizing. 


And with the power of the entire Boomi platform, we can extend these practices even further. Our vision is to leverage the Master Data Hub as a force multiplier for the connected business. As you are integrating with file systems and applications, or enabling data on-boarding and multi-step approval workflows, you may find that a hub architecture best fits your use cases, letting you pivot these critical data sets more broadly across the business. The Master Data Hub will continue to reflect these design patterns and drive more insights to ensure you perform this quickly and safely.


If you are an AtomSphere Process Developer and want to explore examples with Master Data Hub, please review the updated Getting Started Guide.


We look forward to your feedback and ideas as we continue to embark on this synchronization journey!

In the past you may have gotten a vague "Internal Server Error" in Flow when querying an AtomSphere process. However, those days are behind you: you can use AtomSphere's try/catch mechanism and pass the value back to Flow in one fell swoop! To demo, we'll show you how a short-URL creation tool can determine if the URL you submitted is valid. The Flow looks like this:


1. Your AtomSphere process will look something like this: 


2.  The two additional fields you'll add in your AtomSphere Response Profile (in the Listen to Flow entry) look like this:


3.  The magic happens after the Try/Catch mechanism detects that your URL wasn't formed correctly and goes down the "Catch" path. In the Map element, you create a function that sends the error message received from the Try/Catch back to Flow as a response, like this:


4.  Map that response back in Flow in your Message Shape:


5.  ...and get your error message neatly aligned on your Flow Page Shape!

In our next Community Champion spotlight, we talk with John Moore.  Contributors like John make the Boomi Community a vital resource for our customers to get the most from our low-code, cloud-native platform.


What do you like most about working with Boomi?


John:  It forces me to be an engineer — more than a programmer or developer. Anybody can learn a programming language — come in and just spend seven hours implementing something that’s already designed.


But for integration architecture, it’s about the big picture.


Read the full interview here:  Big Picture Integration: Q&A with Boomi Community Champion, John Moore.


Look for more interviews with Community Champions coming soon!

Hi all,


I wanted to share a quick tip about how to spice up your Flows with a personal touch by adding a custom component to capture a user's signature (with their finger or mouse) right within your Flow.


Check out the end result HERE!


  1. It’ll look like this in simple form on the canvas:
  2. Download the attached "signature_player.html" file. Create a new Player in your account, copy and paste the contents, and save.
  3. Download the attached "Signature Pad Flow.json" file. Import it as a new Flow in your account.


The Signature Pad is not a default component, so it needs to be added to the Page component's metadata. The import above will do this for you, of course, but here's a look behind the scenes in case you like to do things "by hand".

  1. In a Page, drag on a “Presentation” component, save the page, then open the Page's Metadata editor, change that component_type to “signature_pad”, and save again.
  2. This is the key line in the player that you'll need to add:

  3. Bada-Boomi!, you should be able to sign inside a Flow, like this:

Whenever you create an API in Boomi, you have multiple options for securing the API through authentication. First of all, we have (Atom-controlled) 'Internal Authentication', which offers several options like Basic Authentication, Client Certificates, and Custom Authentication through JAAS Modules. Furthermore, we have the option of using 'External Authentication' through an External Identity Provider, allowing the Identity Provider to handle authentication based on either SAML or OpenID Connect (OAuth).


Securing APIs through External Authentication involves configuring an Authentication Source like Okta and also installing and configuring an Authentication Broker within API Management. How to build an API and set up External Authentication is beyond the scope of this article; for that, check out Getting Started with External Authentication for API Management.


Within API Management we can register and configure Applications. Users of API Management create Applications which allow another entity, such as a business unit or third-party application, to access the API. The API Key that is generated for each authorized API is what allows the entity access to that particular API, and it has to be provided with each request. Authentication of the user using the application, through e.g. OpenID Connect, will be brokered to the Authentication Source. The Authentication Broker will issue, and subsequently validate, the Access Token for each request as well. This whole flow is outlined in the Getting Started article linked above.


In the case of OpenID Connect, an ID Token is also issued by the Authentication Broker. The main differences between the two types of tokens are:

  • The Access Token is a token that can be used by a client to access an API. Sending the token as an Authorization header with a request informs the API that the bearer of the token has been authorized to use the API. As such it is meant for authorizing the user to a resource server (API).
  • The ID Token is a token containing identity data. It is consumed by the client and is used to get user information like a user's name, email, etc., typically used for display in the UI. So the ID Token is meant for authenticating the user to the client and should not be used to obtain access to any resource (API) or make authorization decisions.


In this article we will explore how we can leverage the claims (statements about an entity) embedded in the Access Token to be used further down the linked API process, e.g. for taking decisions, transformations, masking, routing, etc.


The Access Token that is issued by the Authentication Broker is created as a JSON Web Token (JWT), and as said, once a client has obtained this Access Token, it will include it as a credential when making API requests, in the form of an Authorization header using the Bearer scheme. The content of the header might look like the following:

Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJSU...<snip>...RQuTfPL7r_RhM_nND4x3x5u3QsAxuMtciqJ4z_ZVSYLHf1C_g


JWTs generally consist of three Base64Url-encoded parts, each separated by a '.' (period): a header, a payload, and a signature:



The header identifies which algorithm is used to generate the signature. An example header as issued by our Authentication Broker looks like this:

{
   "alg": "RS256",
   "typ": "JWT",
   "kid": "PMiuZZmryTRDlTbdLWuDlVExT57eDL_O9a61tWRLvdo"
}


What is more interesting in the light of this article is the middle part, or payload, as the payload contains the claims. As said before, claims are statements about an entity (e.g. the user using the application) and additional metadata. An example payload (as issued by our Authentication Broker) looks like this:

{
   "jti": "b9bff6d0-eb80-4fe2-adda-a65b7aadd52d",
   "exp": 1519075018,
   "nbf": 0,
   "iat": 1519074718,
   "iss": "",
   "aud": "6fcc0833-0ec2-4542-bc2c-4cd88286dfc8:OPENID",
   "sub": "280c607e-f3b7-4d85-a0ff-f7ead94ff3be",
   "typ": "Bearer",
   "azp": "6fcc0833-0ec2-4542-bc2c-4cd88286dfc8:OPENID",
   "auth_time": 1519074692,
   "session_state": "96d80fec-cc22-418a-a6fa-481b28d80dca",
   "acr": "0",
   "client_session": "b64b031a-80e6-474c-b931-6e157601b1b1",
   "allowed-origins": [],
   "realm_access": {
      "roles": [ ... ]
   },
   "resource_access": {
      "broker": {
         "roles": [ ... ]
      },
      "account": {
         "roles": [ ... ]
      }
   },
   "address": {},
   "email_verified": false,
   "name": "Rene Klomp",
   "groups": [ ... ],
   "preferred_username": "",                                                            --- Hidden to prevent spam
   "identity_provider_alias": "Okta",
   "given_name": "Rene",
   "family_name": "Klomp"
}


In the token you can find a number of claims where the first seven are actually registered claims (according to RFC 7519 - JSON Web Token (JWT) ) such as:

   "jti": "b9bff6d0-eb80-4fe2-adda-a65b7aadd52d",                                                   --- JWT ID Claim
   "exp": 1519075018,                                                                               --- Expiration Time Claim
   "nbf": 0,                                                                                        --- Not Before Claim
   "iat": 1519074718,                                                                               --- Issued At Claim
   "iss": "",  --- Issuer Claim
   "aud": "6fcc0833-0ec2-4542-bc2c-4cd88286dfc8:OPENID",                                            --- Audience Claim
   "sub": "280c607e-f3b7-4d85-a0ff-f7ead94ff3be",                                                   --- Subject Claim


As an example, 'iat' stands for 'Issued At' and is giving the time at which the token was issued in NumericDate format which is "a JSON numeric value representing the number of seconds from 1970-01-01T00:00:00Z UTC until the specified UTC date/time, ignoring leap seconds - Ref. RFC 7519 - JSON Web Token (JWT) ".

So if you do the math for our example you can find that this particular token was issued at 2/19/2018, 4:11:58 PM EST.
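That conversion can be checked from a shell (GNU date syntax shown; on BSD/macOS the equivalent is `date -u -r 1519074718`):

```shell
# Convert the iat NumericDate (seconds since the Unix epoch, UTC)
# into a readable timestamp.
date -u -d @1519074718
# Mon Feb 19 21:11:58 UTC 2018  (i.e. 4:11:58 PM EST)
```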


Now we are more interested in the public claims, like group memberships for the user, as they are contained within this JWT. We have set up our External Identity Provider, Okta, to include all user attributes as defined on the app profile, and the groups claim in our example is filtered according to a regex filter to contain all groups (groups Regex .*):



As can be seen from the Okta Administration Console below the user is indeed in the 'Everyone' and the 'Consultant' group and this is nicely reflected in the groups claim that can be found in the Access Token above!



Having access to the Access Token and its embedded claims downstream in the linked API process opens up a whole new way of dynamic processing and decision making. Now how do we get access to these claims in the API process?


First of all, access to the Access Token in the API process is gained by adding the 'Authorization' header to the list of 'Dynamic Document Property Headers' on the General API Configuration tab:



This way the Authorization header can be accessed in the underlying API process as the Dynamic Document Property (DDP) 'inheader_Authorization'. Now it is easy to extract the relevant claims embedded in this header representing our JWT Access Token, either by using a Data Process shape with, as a first step, a regular expression to take out the payload and then a second Base64 Decode step to decode it into JSON...



...or as part of a Map shape where we can use a String Split...


...and 2 lines of Groovy script to BASE64 Decode our JWT...
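Outside of AtomSphere, the same extraction can be sketched in a few lines of shell. The token below is synthetic, built inline purely so the round trip is visible; a real token would come from the inheader_Authorization DDP with the "Bearer " prefix stripped:

```shell
# Build a synthetic JWT-shaped token (header.payload.signature) for illustration.
header=$(printf '{"alg":"none"}' | base64 | tr -d '=\n')
claims=$(printf '{"name":"Rene Klomp"}' | base64 | tr -d '=\n')
jwt="${header}.${claims}.signature"

# Take the middle part (the payload) and translate base64url to base64...
payload=$(printf '%s' "$jwt" | cut -d. -f2 | tr '_-' '/+')
# ...restore the '=' padding that JWTs strip off...
case $(( ${#payload} % 4 )) in
  2) payload="${payload}==" ;;
  3) payload="${payload}=" ;;
esac
# ...and decode back into the JSON claims.
printf '%s\n' "$payload" | base64 -d
# {"name":"Rene Klomp"}
```

The base64url-to-base64 translation and the padding restoration are the two details that trip people up when decoding JWT payloads by hand; the Data Process or Groovy approaches above have to handle them as well.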



In the sample service-enabled process below, which exposes patient information inside a database as a REST API, I have made the test for a groups claim containing 'Staff' very explicit as part of the first branch. Again, there are many ways you can implement this; this is just one way of doing so.



In the second branch we get the data for a certain patient, based on a patient id that we send as a query parameter as part of the request, and we do the transformation into JSON to return in the response. It is in the Map shape that we check whether the property inStaff was set to true or false in the first branch based on the group membership claim. In case the property inStaff is false, we obfuscate the 'SSN' field in our example...



The function is a very simple two-step function...



where the script is as simple as this:


val_out = (authorized=='true') ? val_in : '#########'


Now, let's see this in action!


As an example I have built an Android App that is consuming our 'Patient API'. The API is protected by Okta based on OpenId Connect as described above and the App is registered in Boomi API Management. The App is authorized to use the API with the API Key that is generated as part of the App registration and API authorization in Boomi API Management...



First of all, if we log in as user 'Rene Klomp', who is not a member of Staff, we see that we need to authenticate using username and password with Okta (screenshot 1 below). Furthermore, when logged in, we see at the top of the App that we can present any claims embedded in the ID Token in the UI. In this case we present who is logged in to the App by using the name claim. If we request the information for a patient with patient id = 1234 and scroll down to find the SSN, we see (as expected - screenshot 2 below) that it is obfuscated, as this user is not a member of Staff. When we log out and log back in as a user who is a member of Staff ('John DiStasio') and scroll down again, we can see the SSN is clearly visible - again, as expected (screenshot 3 below)!




Summarizing we have seen how claims inside the Access Token can be used within the API process to perform some conditional processing or logic. As an example we have seen how we can hide or obfuscate a certain field as part of the response based on a certain group membership of the user registered in the external Identity Provider. This is just an arbitrary example and there are of course many possible applications of using these claims within the API process!

Hi Boomi users and enthusiasts,


My name is Andrew Mishalove (you can call me Andrew) and I am super excited about my new role here at Dell Boomi.  I am just beginning my third week and wanted to introduce myself to all of you.  My official title is Technical Community Manager and Strategist.  My promise to you is to carry on and further develop the great work started by Adam Arrowsmith.  If I had to boil that down to a single statement, I would say it is my intention to ensure that the Boomi Community becomes the "One Stop Shop for all Things Boomi" and that it will be central to your journey as a Boomi customer.  I plan to continue with Adam's accomplishments and further develop everything you have come to love and enjoy, such as a robust Knowledge Center, Community Forums, and the Product Ideation platform.  In addition, I am working closely with Adam to bring even more to you through an enhanced user experience.  More on this as the planning stages begin to mature.


Before coming to Dell Boomi, I created a small consultancy (Digital Workplace Labs) and was focused on working with customers on social business strategy, design, and solution development aiming to optimize workplace collaboration and performance.  Prior to that, I spent 6 years conceptualizing, building, and directing two enterprise social business programs at Groupon and CallMiner.  Here I am in action passing along community tips and tricks to an audience in Berlin at the annual Enterprise Business Collaboration conference:



I have a passion for creating inspiring work cultures.  From personal experience and connection with leaders in multiple industries, I have been convinced of this vital correlation: happy employees lead to happy customers.  Through the teachings of TEDx speaker John Stepper, I believe in a “Working Out Loud" work style, which is making your knowledge purposefully accessible so that it may help others.  I hope to pass along my knowledge of Working Out Loud so that everyone may benefit from this interactive communal work style.  I am committed to customer success and helping to empower others to create truly transformational ideas.  Here I am passing along John's teachings to the audience at Intra.NET Reloaded London:



Through my experience building online communities, I have created strong and lasting relationships that go well beyond the digital world.  As Tony Robbins once said, "The only impossible journey is the one you never begin."  I look forward to meeting each of you through the Community and helping enhance your experience as a Boomi user and enthusiast.  I hope we will get a chance to meet offline at Boomi World later this year or at another Boomi event.  Please reach out to me any time with your questions, comments, suggestions, or simply to say "Hi" and introduce yourself.


I am based in Jensen Beach, Florida, former pineapple capital of the world.  When I am not online creating a better Boomi Community experience, I enjoy being outdoors taking advantage of all the great natural resources Florida has to offer.


The Office


Play Time


Last thing, please don't forget to connect with me on the Boomi Community as well as LinkedIn and Twitter.


Let's make this journey lasting and memorable!


Your friend and future "Community Oracle",


Andrew Mishalove


In our next Community Champion spotlight, we talk with one of our most active contributors and champions, Srinivas Chandrakanth Vangari.


How do you introduce Boomi to clients?


Srini: When clients want to integrate cloud applications and data, I offer an overall solution framework. And mostly at that stage, what they’re looking for is security and scalability. While every customer circumstance is different, I emphasize that Boomi has several capabilities that fit their integration scenario — app and data integration, EDI, etc.


Read the full interview here: Making Integration Easy: Q&A with Srini Vangari.


Look for more interviews with Community Champions coming soon!

Today, Boomi announced our Fall 2017 platform release. This release highlights key new features and enhancements made generally available in the 2nd half of this year, and includes many capabilities requested by the Boomi Community (Thank you!). 


The new capabilities highlighted in the Fall 2017 release span all products on the Boomi platform. Collectively, these capabilities further improve our customers' ability to efficiently connect everything and engage everywhere across any channel, device, or platform.


The key capabilities highlighted in this release are grouped under three functional areas:

Scalability with Security: Support your digital initiatives and exploding data volume, while providing the right people access to the right data.

High Productivity: Reduce development time by simplifying process automation, and leveraging best practices across the entire platform.

Integration Accelerators: Enhance efficiency with new connectors, pre-built components and tools to simplify IT-business collaboration. 


Learn More 

  • Learn more about these new Boomi platform capabilities here
  • Read the Fall 2017 press release here 

Hi all, in my latest blog I look at the questions: “How are Boomi and AI related?” and “How are Boomi and Salesforce Einstein AI related?” I am curious to know your thoughts about Boomi and AI.


Boomi, Salesforce and Einstein: Bringing Intelligence to Your Data - Dell Boomi 


References and Additional Reading:


Thameem Khan is a Principal Solutions Architect and Chief Opinionist (Yes...that's what my boss calls me) at Dell Boomi and has been instrumental in competitively positioning Dell Boomi. He is good at two things: manipulating data and connecting clouds.

In this article, I will explain why an event-driven architecture is valuable to the business, discuss different ways to event-enable an application, and provide a step-by-step solution for how to event-enable any database table.


 Get the processes discussed below from the Process Library here.




Why Event-Enable the Enterprise

The Real-Time Enterprise responds to business and data events in real time or near real time. The traditional way of processing events is in batch mode, once a day, usually during off hours. Being able to process events quickly can be very valuable to an enterprise. Richard Hackathorn wrote a very informative article on real-time event processing called The BI Watch: Real-Time to Real-Value. Richard talks about the net value of processing an event and how that value decreases the longer it takes to react to the event. Here is a graph that he shows:



Therefore, the faster you can process an event after it happens, the more value you can get. This business event could be a request for a quote from a prospect or changing an opportunity to Closed/Won. The business event can be an opportunity or a threat like someone creating a support ticket to report a bug in your product. Being able to capture the event and react to it quickly can provide tremendous value to the business.


How to Event-Enable Your Enterprise

Now that I have discussed the business value of processing events in real time, let's cover the different ways this can be done. The best way to capture events is to utilize the capabilities of the system that generates them. Many on-premises and cloud-based platforms provide this capability out of the box. You should always use the 'front door' of an application, its published API, if one exists. For example, Salesforce has outbound messages that are very easy to configure, and NetSuite has SuiteScripts that can emit real-time outbound messages in response to events. These events could be someone creating a new account in Salesforce, or a user updating a Sales Order in NetSuite. On-premises applications like SAP can send outbound IDocs in response to events, and Oracle EBS can emit events via Oracle Advanced Queuing. To event-enable these applications, leverage their native event architecture.


Event Enable Any Database Table Framework

This article will discuss how you can event-enable an application that doesn't have native event processing built in. We will leverage the 'back door' of the application and create an event-enabled architecture using the back-end database. There are at least three ways of getting events (inserts, updates, and deletes) from a database table: using a Last Modified Date field, using the database audit log, and using a Change Data Capture (CDC) table with triggers. The Last Modified Date approach assumes that such a field exists, and you have to query the entire table for updates. Getting deletes would also be difficult unless the application does a logical delete, because a physical delete entirely removes the record. This approach may not perform well in a high-transaction environment, especially if the table has many rows. While the database audit log approach may work, not all databases support it out of the box, and you may have to license another product from the database vendor or purchase a third-party product. This article will focus on using a CDC table. With this architecture, you don't have to continually query the table for modified rows. The triggers add a little time to inserts, updates, and deletes on the base table, but this should be negligible. This CDC implementation can be done in two parts: a design-time (one-time) configuration and a run-time configuration.
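To make the trade-off concrete, here is a small runnable sketch of the Last Modified Date approach, using SQLite in place of SQL Server (table and column names are illustrative). Every poll scans the table, and a physical delete would never show up in the results:

```python
import sqlite3

# Sketch of the Last Modified Date polling approach: query the whole
# table for rows changed since the previous poll. Physical deletes are
# invisible to this query, which is the limitation noted above.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Person (
    Id INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT,
    LastModified TEXT DEFAULT CURRENT_TIMESTAMP)""")

last_seen = "1970-01-01 00:00:00"

def poll_changes():
    """Return rows modified since the previous poll (inserts/updates only)."""
    global last_seen
    rows = conn.execute(
        "SELECT Id, FirstName, LastName, LastModified FROM Person "
        "WHERE LastModified > ? ORDER BY LastModified", (last_seen,)).fetchall()
    if rows:
        last_seen = rows[-1][3]  # remember the newest timestamp we saw
    return rows

conn.execute("INSERT INTO Person (FirstName, LastName) VALUES ('John', 'Doe')")
print(len(poll_changes()))  # 1: the new row
print(len(poll_changes()))  # 0: nothing changed since the last poll
```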


Design-Time Configuration

This is a diagram of the design-time configuration. Note that a stored procedure facilitates the creation of the CDC table and the database triggers on the base table. This asset and a Boomi integration process are provided below. The process accepts one input: the name of the base table. The database assets can therefore be created automatically, or a DBA could manually create the CDC table and the triggers.

 Design-Time Configuration Diagram


This solution to event-enable a database table is based upon a base table, a CDC table, some database triggers, some Boomi integration processes, and an Apache ActiveMQ topic (you could use a Boomi Atom Queue topic if you have this capability). For example, let's say you want to 'watch' a database table named Person for inserts, updates, and/or deletes and get notified, with all of the event data, when these events happen. What needs to happen is to create a CDC table (boomi_Person) that has the same columns as the base table (Person) plus a few extra columns to hold contextual information about the event; this will be discussed later in detail. Database triggers then have to be created on the base table (Person) for On Insert, On Update, and On Delete. These triggers simply insert the events into the CDC table (boomi_Person). This completes the database side of the design-time configuration. At this point, whenever any application runs any DML SQL statements against the base table, the corresponding events will be automatically inserted into the CDC table.
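The trigger-driven CDC design can be sketched end to end with SQLite triggers standing in for the article's Transact-SQL stored procedure (the real solution targets SQL Server; this is just a runnable illustration of the pattern, with the same Person / boomi_Person names):

```python
import sqlite3

# Illustrative CDC sketch: a base table, a CDC table with event context
# columns, and AFTER triggers that copy every DML event into the CDC table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (Id INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT);

-- CDC table: same columns plus context about the event
CREATE TABLE boomi_Person (
    Id INTEGER, FirstName TEXT, LastName TEXT,
    EventType TEXT, EventDate TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TRIGGER Person_OnInsert AFTER INSERT ON Person BEGIN
    INSERT INTO boomi_Person (Id, FirstName, LastName, EventType)
    VALUES (NEW.Id, NEW.FirstName, NEW.LastName, 'INSERT');
END;

CREATE TRIGGER Person_OnUpdate AFTER UPDATE ON Person BEGIN
    INSERT INTO boomi_Person (Id, FirstName, LastName, EventType)
    VALUES (NEW.Id, NEW.FirstName, NEW.LastName, 'UPDATE');
END;

CREATE TRIGGER Person_OnDelete AFTER DELETE ON Person BEGIN
    INSERT INTO boomi_Person (Id, FirstName, LastName, EventType)
    VALUES (OLD.Id, OLD.FirstName, OLD.LastName, 'DELETE');
END;
""")

# Any DML against the base table now lands in the CDC table automatically.
conn.execute("INSERT INTO Person (FirstName, LastName) VALUES ('John', 'Doe')")
conn.execute("UPDATE Person SET LastName = 'Stamos' WHERE FirstName = 'John'")
conn.execute("DELETE FROM Person")

events = conn.execute(
    "SELECT EventType, FirstName, LastName FROM boomi_Person "
    "ORDER BY EventDate, rowid").fetchall()
print(events)
# [('INSERT', 'John', 'Doe'), ('UPDATE', 'John', 'Stamos'), ('DELETE', 'John', 'Stamos')]
```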


Run-Time Configuration

Now for the run-time configuration. First, download, install, and configure Apache ActiveMQ on a system that the Boomi runtime can access. A Boomi integration process will get all event data in the CDC table ordered by event date, publish each record to an Apache ActiveMQ topic, and delete the processed records from the CDC table. This process should be deployed, with a scheduled job created to execute it automatically as frequently as every minute. Another Boomi integration process will subscribe to the ActiveMQ topic and process the event. You could implement an ETL scenario, synchronize the data to another table or database, or notify someone about the event; you can implement any type of event processing logic you require. Because we are leveraging the ActiveMQ topic, other processes can be created that subscribe to the same topic to implement other types of event handling. This is why the ActiveMQ topic is being used. It is possible to implement the event handling logic in the process that pulls the event data, but this is not as decoupled nor as extendable. I would only suggest skipping the ActiveMQ topic if you don't want another system in your architecture or you already have a JMS-compliant message bus to leverage. Here is a diagram of how the run-time solution works:
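Here is a minimal sketch of that poll-publish-delete cycle. An in-memory queue stands in for the ActiveMQ topic, and SQLite for SQL Server; the real Boomi process publishes over JMS and is scheduled through AtomSphere:

```python
import sqlite3
import queue

# Sketch of the run-time polling process: read all rows from the CDC
# table in event order, publish each to a topic, delete the processed rows.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE boomi_Person (
    FirstName TEXT, LastName TEXT, EventType TEXT,
    EventDate TEXT DEFAULT CURRENT_TIMESTAMP)""")
conn.executemany(
    "INSERT INTO boomi_Person (FirstName, LastName, EventType) VALUES (?, ?, ?)",
    [("John", "Doe", "INSERT"), ("John", "Stamos", "UPDATE")])

topic = queue.Queue()  # stand-in for the ActiveMQ topic 'DatabaseEvents'

def process_cdc_events():
    """The job scheduled to run every minute: poll, publish, delete."""
    rows = conn.execute(
        "SELECT rowid, EventType, FirstName, LastName "
        "FROM boomi_Person ORDER BY EventDate, rowid").fetchall()
    for rowid, event_type, first, last in rows:
        topic.put({"event": event_type, "FirstName": first, "LastName": last})
    # delete only the rows we just published, so events that arrive
    # mid-run survive until the next poll
    conn.executemany("DELETE FROM boomi_Person WHERE rowid = ?",
                     [(r[0],) for r in rows])
    return len(rows)

print(process_cdc_events())  # 2 events published
print(conn.execute("SELECT COUNT(*) FROM boomi_Person").fetchone()[0])  # 0
```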




Synchronize to Database Table ETL Implementation

The event processing implementation for this article will synchronize the data changes in the base table to another table. This is a common ETL pattern that can be modified for your specific ETL requirements. The ETL table is called PersonETL; once the solution is configured, it will always mirror the base table exactly, with the same number of rows and the same data. Here is a graphical representation of this implementation:



Solution Limitations

This solution was developed on MS SQL Server Express 10, although it should run on most other versions of MS SQL Server. While this event-driven architecture can be manually implemented (CDC table and base table triggers) by a DBA for other database types, the stored procedure that creates the CDC table and the base table triggers is implemented in Transact-SQL and therefore only runs on SQL Server. These are the SQL Server data types that have been tested and are supported:


Numeric: bigint, decimal, float, int, money, numeric, real, smallint, smallmoney, tinyint

Date and Time: date, datetime, datetime2, datetimeoffset, smalldatetime, time(7)

String and Binary: binary, bit, char, nchar, nvarchar, varbinary, varchar

Others: geography, geometry, hierarchyid, uniqueidentifier, xml


The following MS SQL Server data types are not supported, mostly because they can't be included in database triggers: image, ntext, nvarchar(MAX), sql_variant, text, timestamp, varbinary(MAX), and varchar(MAX).


Also, this solution doesn't have any error handling or Try/Catch shapes. To productionize the solution, you may want to add some error handling to the Process Person Events from CDC Table and Subscribe to Person Events Topic Boomi integration processes.


Important: This solution is provided as an example and is not intended for production use as-is. It is not delivered nor maintained by Dell Boomi. You should thoroughly evaluate and test for your own scenarios before considering for production use. Use this framework at your own risk.



Steps to Setup this ETL Implementation

  1. Download the file link at the bottom of this article called Event Enable Database Table. Extract the contents on your computer.
  2. From the contents of the Event Enable Database Table, find and execute the Create Person Table.sql, Create PersonETL Table.sql, and CreateBoomiEventTableAndTriggers_SP.sql scripts in your instance of MS SQL Server.  You may have to include a 'Use <database name>;' at the top of each script to make sure the assets get created in the proper database.
  3. From your Dell Boomi AtomSphere account, install the Database: Event Enable any Database Table Process Library into your account. This "container" process contains three processes.
  4. Read the process descriptions for each of the three Boomi Integration Processes contained in the Process Library: Event Enable Table by TableName, Process Person Events from CDC Table, and Subscribe to Person Events Topic. The instructions on how to configure each process for your environment are included in the process description. Basically, you have to enter the appropriate information in the SQL Server DB Connection to point to your instance of SQL Server and walk through the wizard for each Database profile.
  5. Open the Event Enable Table by TableName Boomi integration process in your Boomi account and test it on a Boomi runtime Atom/Molecule/Cloud that has access to your SQL Server instance. Confirm that the CDC table (boomi_Person) and 3 triggers were created for the base table (Person).
  6. Make sure you deploy the following Boomi integration processes (as noted in the process descriptions) to your Boomi runtime that has connectivity to your SQL Server instance: Process Person Events from CDC Table and Subscribe to Person Events Topic. Also, be sure to setup a scheduled process to automatically run the Process Person Events from CDC Table process every minute or any other time interval.
  7. Create a topic on ActiveMQ called DatabaseEvents.
  8. This should complete your configuration. You should now have a working implementation of the Event Enable Any Database Table framework.


Testing your Solution

  1. Open the BoomiEventDrivenPersonTableQueries.sql file, included in the zip file below, in your favorite MS SQL Server database client.
  2. Run the following SQL statements in the BoomiEventDrivenPersonTableQueries.sql file. They should all return 0 rows.
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL
  3. Execute the following SQL statements in the BoomiEventDrivenPersonTableQueries.sql file. They should insert 3 records in the Person table.
    1. insert into Person (FirstName, LastName) values ('John', 'Doe')
    2. insert into Person (FirstName, LastName) values ('Jane', 'Smith')
    3. insert into Person (FirstName, LastName) values ('Dell', 'Boomi')
  4. Now run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL (this query should return 0 rows)
  5. After the scheduled job kicks off the Process Person Events from CDC Table process (you will know this from the Process Reporting page), you will see the following from running the queries again:
    1. select * from Person
    2. select * from boomi_Person (this query should return 0 rows)
    3. select * from PersonETL
  6. Now run the following SQL statements
    1. update Person SET LastName = 'Stamos' WHERE FirstName = 'John'
    2. update Person SET LastName = 'Johnson' WHERE FirstName = 'Jane'
    3. update Person SET FirstName = 'Go' WHERE LastName = 'Boomi'
  7. Now run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL
  8. After the scheduled job kicks off the Process Person Events from CDC Table process, you will see the following from running the queries again:
    1. select * from Person
    2. select * from boomi_Person (this query should return 0 rows)
    3. select * from PersonETL
  9. Run the following SQL Statement
    1. delete from Person
  10. Run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person (0 rows returned)
    2. select * from boomi_Person
    3. select * from PersonETL
  11. After the scheduled job kicks off the Process Person Events from CDC Table process, you will see the following from running the queries again. There should now be 0 rows in all 3 tables.
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL


User Guide Articles

Here are some links to our User Guide, which you may find useful when using and configuring this event-enabled database table.


I would like to thank my former colleague, Steven Kimbleton for creating the stored procedure that is used in this solution.


Harvey Melfi is a Solutions Architect with Dell Boomi.

There are some whizbang features that people don't come to the Flow team to build at enterprise scale. Oftentimes, it's just a quick tool that you wish you had in your tool belt, something you can use now and build on later. This was one such scenario, worth a quick write-up to get your juices flowing (pun intended) on what other sorts of things you can build! Ever wanted to jot down a quick note or reminder and have it save to a Cloud location like Google Drive for editing later? Welp, you've come to the right place to set it up! We'll be building this super simple tool that goes from Flow-->AtomSphere-->Google Sheets: 



1. Create a Google Sheet

Well, first things first, create a Google Sheet that you want to write to, and note the ID it creates (we'll use that later): 


2. Setup AtomSphere

1. Now we are going to set up AtomSphere to LISTEN for Flow to send it data; in this case, an "aQuickNote" variable that we want to post to that Google Sheet we just made. Go to AtomSphere and create a new Process: 


2. Set up the Connector to just listen in, like this: 

Note: You don't need a response profile, because we're not going to expect anything in return, we just want to SEND.


3. Configure it with the two variables Google cares about, the VALUE, and the SpreadsheetID: 


It should look super clean with your first shape like this: 


4. Add a Google Sheets connector shape: 


5. Create a new Google Connection if you don't have one already. For this example, we'll start from scratch and show you how to get one. You'll need an API key from Google, and they tell you how to get that HERE.


6. From the Credentials page, select your connection: 


7. Copy your ClientID and Client Secret and paste those into your AtomSphere session:


When you click "Generate", Google will prompt you with a few things: 

And it'll give you this critical piece!  



Note: This is generated by that callback URL when you put your BoomiID above! Make sure that's in there!


Note: You may get an error saying "invalid client" from AtomSphere, if so, make sure you copy/paste without a trailing space (like I just did [doh!]) --and put them back in there, click GENERATE again, and you should see: 


8. Now that we've got a connection to Google, we just need to tell it what to DO once we send it the data:


9. Click the PLUS SIGN in the "Operation" field to create a new Operation.


10. This is where Boomi will do all the heavy lifting for you! Click "import":


11. Choose your Google Connection you just made: 


12. Insert that SheetID you created in Step #1!



13. Click next, and you'll be prompted like this (be sure to choose "Record data"):


14. AtomSphere will know to create the proper request and response profiles for you!


15. Click "Save and Close": 


16. Now create a map that sends the data the way that Google wants to see it!: 




17. Now connect your shapes, deploy the process to production, and update the service to call it, like this: 


Your finished map should look about like this: 

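For context, the Google Sheets connector's append boils down to a single REST call against the Sheets v4 API: a POST to the spreadsheet's `values/{range}:append` endpoint with the note wrapped in a `values` array. This sketch only builds the request and sends nothing; the sheet name "Sheet1" and the spreadsheet ID are placeholder assumptions:

```python
import json

# Build (but do not send) the Sheets v4 append request the connector
# effectively makes, from the two things Google cares about:
# the VALUE and the SpreadsheetID.
SHEETS_API = "https://sheets.googleapis.com/v4/spreadsheets"

def build_append_request(spreadsheet_id, a_quick_note, sheet="Sheet1"):
    url = (f"{SHEETS_API}/{spreadsheet_id}/values/"
           f"{sheet}!A1:append?valueInputOption=RAW")
    body = json.dumps({"values": [[a_quick_note]]})  # one new row
    return url, body

url, body = build_append_request("1AbC-placeholder-xYz", "buy milk")
print(url)
print(body)  # {"values": [["buy milk"]]}
```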

3. Setup Flow

1. Create a new Service to send data to Atomsphere (if you don't already have it), like this: 


2. A lot is happening in this next slide, but the idea is that you want to grab your creds from your Atom in AtomSphere (Red arrows) and paste the creds in the corresponding (yellow arrows) fields: 


When you click "Continue" in Boomi Flow, take a look at the "Preview Actions and Types" and you'll see that it was added: 


Save that service with the password (repaste it) and click Save:


3. Now go import that Service into your flow: 



4. Now create a new "Page" from the left side (this is where we'll ask what we want to save there): 


5. Pull an "input" into the navigation pane: 


6. Map the value: 


7. Create a new value with what we want to send over to Google!


8. Save the value. Save the Page. Back to the main screen and drag the Start Shape to the New Page you just made: 


9. Now we gotta send that to Google, via the Service we just made above, so drag in a "Message" component: 


10. Set the values like this: 









11. Ok, now we tie them all together and click "Publish", cross our fingers, and...:


If all went as planned, you should see the same thing! Check it out: Boomi Flow to Google Docs! - YouTube 


Congrats, you just made a quick place to save notes in no time!


Extra credit: 

Let's clear out the variables that it saves, so it can be a clean slate every time!


Drag an "Operator" onto the Flow, and configure it to set that value to "Empty" every time it loops back, like this: 


Let's see what that does now when the map looks like this: 


One more neat trick, it is mobile-friendly, out of the box!  See this --> Mobile (Boomi Flow) to Google Docs! - YouTube 


Congratulations, you just made a quick Reminder App via Boomi!


Ok, Ok, ONE more extra-credit-worthy piece --How about a way to add in and SEE your current list?  Easy enough, just drag in a new "Step" (See current list), and use an iframe to call the Google Sheet, like this:



Now it'll look like this from the "current" tab: 

Want to save this JSON for your own? Another fine aspect of Boomi Flow --Download the Flow config JSON file attached below and run it for yourself to see the magic!


Andy Tillo is a Boomi Sales Engineer for the Boomi Flow product based in Seattle, WA. I come from an infrastructure background, so it is nice to have something more code-based to sink my teeth into! Boomi Flow has been a great platform to get there!

Today we had a call around GetGuru and how we could use it internally. GetGuru is a knowledge management solution, and we've been looking for ways to make it easier to use in context. I heard that it had an API we could hook into, so I thought this the perfect situation to try to tie our frontend (Boomi Flow) to a backend piece from GetGuru. In this post we'll show you how to add new knowledge cards to your GetGuru environment by way of Boomi Flow and AtomSphere!



The Big Idea

Ok, so here's what we're trying to do here:

  1. Create a Boomi Flow to present a UI to enter information to create a new Guru card.
  2. Upon submission, the Flow invokes an AtomSphere integration process using the Flow Service Listener connector.
  3. The process uses the HTTP Client connector to call a Guru API to create the card.


Ready? Let's get started!


Setting Up Postman

To help understand the GetGuru API, grab some internal IDs to use later, and create some sample request and response data to assist with importing Profiles in AtomSphere, a great utility to use is Postman. Postman acts as a middleman and lets you see all the data: where it is going, and what the response comes back as.


So…we can deduce what to send from this:


Specifically this:

getguru says this:


Postman is a great place to start, as you know what to enter in each of those because of two things:

  1. The very first page of GetGuru says this:
    To test your authentication credentials, you can use the curl command line tool.
    curl -u USER:TOKEN -D -

If your credentials were correct, you should receive a response that lists information about your Guru team:


HTTP/1.1 200 OK
Content-Type: application/json


[ {
 "status" : "ACTIVE",
 "id" : "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
 "dateCreated" : "2016-01-01T00:00:00.000+0000",
 "name" : "My Guru Team"
} ]
 "name" : "My Guru Team"


To validate you’re able to send, use curl:


C:\curl\src>curl -u -D -

HTTP/1.1 200 OK
Content-Type: application/json
Server: Jetty(9.4.1.v20170120)
transfer-encoding: chunked
Connection: keep-alive

[ {
 "status" : "ACTIVE",
 "id" : "d50be168-8e81-48c9-8306-6d4eea484ef3",
 "dateCreated" : "2017-09-14T02:20:08.197+0000",
 "totalUsers" : 1,
 "profilePicUrl" : "",
 "name" : "FlowTest"
} ]


C:\curl\src>curl -u

[ {
 "lastModified" : "2017-09-15T02:41:39.176+0000",
 "id" : "81d55e9c-3922-46cf-be89-877477e6f3ca",
 "collection" : {
   "id" : "c576da5d-80d1-4606-9543-7da62ee13421",
   "name" : "General"
 },
 "lastModifiedBy" : {
   "status" : "ACTIVE",
   "email" : "",
   "firstName" : "Andy",
   "lastName" : "Tillo"
 }
} ]

So now we know the BoardID and the CollectionID that GetGuru says we need to make a call, as well as what GetGuru WANTS to receive in JSON format. We can copy that GetGuru snippet and save it to a .JSON file on your desktop. Call it Getguru_WANTS_THIS_INPUT.JSON.


The file should look like this:



 "preferredPhrase": "What _I_need",

 "content": "content_goes_here",

 "shareStatus": "TEAM",

 "collection" : { "id" : "c576da5d-80d1-4606-9543-7da62ee13421" },

 "boards": [{"id" : "81d55e9c-3922-46cf-be89-877477e6f3ca"}]


A little tip: You can put whatever you want in between the “” --Boomi takes that out and just uses the skeleton of what is in there!
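If you want to see the whole request outside of Postman and AtomSphere, here is a hypothetical Python sketch that assembles the same Basic-auth header the curl examples use, plus the JSON body from the skeleton above. Only the headers and payload are built (nothing is sent, since the card-creation URL isn't shown in the text); the IDs are the sample collection/board IDs from this article:

```python
import base64
import json

# Build the USER:TOKEN Basic-auth header and the card-creation payload
# matching the Getguru_WANTS_THIS_INPUT.JSON skeleton.
def build_card_request(user, token, title, content, collection_id, board_id):
    auth = base64.b64encode(f"{user}:{token}".encode()).decode()
    headers = {"Authorization": f"Basic {auth}",
               "Content-Type": "application/json"}
    payload = {
        "preferredPhrase": title,
        "content": content,
        "shareStatus": "TEAM",
        "collection": {"id": collection_id},
        "boards": [{"id": board_id}],
    }
    return headers, json.dumps(payload)

headers, body = build_card_request(
    "user@example.com", "TOKEN", "What _I_need", "content_goes_here",
    "c576da5d-80d1-4606-9543-7da62ee13421",
    "81d55e9c-3922-46cf-be89-877477e6f3ca")
print(json.loads(body)["shareStatus"])  # TEAM
```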


Setting up AtomSphere

Now make that code map INTO the HTTP client header here:


Opening that link up looks like this (we’ll talk about the left side later on):


To get the data on that right side, click “choose” (make sure dropdown is JSON):




Now you can map the fields like this (cause it knows the JSON format now):


Now send it on to the HTTP Client connector:


Here's the HTTP Client Connection:


And the HTTP Client Operation (note all you have to change is the JSON input header):


Now it’s going to come OUT of the HTTP connector (from GetGuru) with a bunch of new stuff! Conveniently, stuff we saw when we ran the Postman query at the beginning, remember?


So all that stuff below, is what the RESPONSE from GetGuru is sending back, ergo, we can now copy that whole bunch of text there, and do the same thing we did with the first map. It’s now a sample of what we want to get OUT and eventually send back to Flow! Copy/Paste it all into a new JSON profile with “Import” the same way as before, and open the next map:


Now, again, do the same mapping dragging/dropping exercise! But this time, with that OUTPUT from GetGuru, and map it BACK to what we originally were LISTENING for from Flow:


So this is the part where we talk about that piece I said we’d talk about later --The first shape, the LISTENER. Because that is ultimately what we’re mapping BACK to. We LISTEN for Flow to send us something, and then take that data and push it through the HTTP Client connector, and ultimately back to Flow with the same variables as we requested in the first place.


Don’t forget this little guy (return documents), he’s important:


Here’s what the Start shape config looks like (the LISTENER):


Click the pencil in “operation” and it’ll show you this:


These guys are just saying “LISTEN” for a request (profile). It’s just sitting there, waiting for Flow to talk to it. When it receives something, it starts its business. Here’s that first listener call:


(We’ll show how we get those variables in the next section, let’s finish up with this one first.)


The RESPONSE PROFILE: “Give this back to Mwho” is a set of variables that you’re telling Flow to keep an eye out for, cause they’ll be coming back to you --notice they have the EXACT SAME SPELLING as the last Map Shape that we tied them to --don’t mess that one up, or you’ll be banging your head for a while! 


NOTE: In order to get a response back from AtomSphere, the Output Names in the Message Action must match the Response Profile names from the AtomSphere Listener.

If any changes are made to the profiles in the AtomSphere Flow Listener process, you must update/refresh the Boomi Flow service associated with that Process.



Then DEPLOY that process:


And deploy the Flow Service that ties that action to your Process:


So as everything is tied up on the AtomSphere side, let’s go see what we set up in Flow…


Setting up Flow

It’s a very simple flow to create in Flow. (Or if you want to take the easy way out, get the Flow config from the attached JSON file below!)


The first thing you want to do is import that AtomSphere connector in your tenant. Click Services-->Install Service.


Call it whatever you want, and the URL is based on the Flow Service setup in AtomSphere:


You get the first part of your URL from AtomSphere-->Manage→:


You get the second part of your URL for the Service in AtomSphere from the Flow Service here:


You get your username and password from AtomSphere-->Manage→[Atom of choice]-->Shared Web Server:


Once you enter that info, click “Save Service”:


Now we’re cookin’! You have mapped AtomSphere to Flow, and Flow to AtomSphere now!


Go back to your Flow and click (or make) the first page component:


Edit page layout. Notice you’re just mapping components now:


Create each component as an “input” with your choice of variable name (type: string) on the right side:


The magic comes in when you click (or create) the “send to getguru” button. This is where the magic happens!


Note: Prior to that, you may need to setup the “New message service” and choose your service:


OTHER NOTE: if you DON’T see your service in there, Navigate to Shared Elements at the top right corner to import it:



Now, back to the message config we were talking about:


Notice how the INPUTS are the same name as the REQUEST PROFILE in AtomSphere:


Then note how the OUTPUTS are the same as the RESPONSE profile in AtomSphere LISTENER process; convenient, huh?


These need to be spelled EXACTLY the same, don’t forget that --these are the magic sauce! Note the VALUE names on the OUTPUTs --these are what we’re going to be calling in the last STEP shape in Flow, to show the user after it’s gone through the pipe!


The final step is just showing back to the page, those variables that you captured in the OUTPUT section:


We’re done! Now let’s see how it works. Publishing/running the flow looks like this:


...and when we hit “send” it looks like this:


Bada boom, you just made a card in GetGuru via Flow and Boomi!

Extra Credit

A trick of the trade (or a test of your skills of what you’ve learned) --you now have the CardID that was given back to you up above:

CardID: d9206c70-22bb-4fcc-90c7-1d8d72ce15f9


You can navigate directly to a card like this:<CARD_ID_HERE>. How can you concatenate some strings together to show the user a link like that, instead of just a Card ID?


Andy Tillo is a Boomi Sales Engineer for the Boomi Flow product based in Seattle, WA. I come from an infrastructure background, so it is nice to have something more code-based to sink my teeth into! Boomi Flow has been a great platform to get there!

It's been a little while since my last post--it's a very exciting and busy time here at Boomi as we grow rapidly. Recently, I sat down with the folks at TalkingIO to talk about everything from iPaaS and bimodal IT to microservices, serverless architecture, and even the solar eclipse.


Listen to the podcast here: Episode 4: Boomi. As always, happy to hear your comments and feedback!


Also, quick plug for Boomi World next week! Be sure to check out my session on "Future Trends for the Connected Business". Hope to see you there!


Thameem Khan is a Principal Enterprise Architect at Dell Boomi and has been instrumental in competitively positioning Dell Boomi to our large enterprise customers. He is good at two things: manipulating data and connecting clouds.