
Boomi Buzz


In our next Community Champion spotlight, we talk with one of our most active contributors and champions, Srinivas Chandrakanth Vangari.


How do you introduce Boomi to clients?


Srini: When clients want to integrate cloud applications and data, I offer an overall solution framework. And mostly at that stage, what they’re looking for is security and scalability. While every customer circumstance is different, I emphasize that Boomi has several capabilities that fit their integration scenario — app and data integration, EDI, etc.


Read the full interview here: Making Integration Easy: Q&A with Srini Vangari.


Look for more interviews with Community Champions coming soon!

Today, Boomi announced our Fall 2017 platform release. This release highlights key new features and enhancements made generally available in the second half of this year, and includes many capabilities requested by the Boomi Community (thank you!).


The new capabilities highlighted in the Fall 2017 release span all products on the Boomi platform. Collectively, these capabilities further improve our customers' ability to efficiently connect everything and engage everywhere across any channel, device or platform.


The key capabilities highlighted in this release are grouped under three functional areas:

Scalability with Security: Support your digital initiatives and exploding data volume, while providing the right people access to the right data.

High Productivity: Reduce development time by simplifying process automation, and leveraging best practices across the entire platform.

Integration Accelerators: Enhance efficiency with new connectors, pre-built components and tools to simplify IT-business collaboration. 


Learn More 

  • Learn more about these new Boomi platform capabilities here
  • Read the Fall 2017 press release here 

Hi all, in my latest blog I look at the questions: “How are Boomi and AI related?” and “How are Boomi and Salesforce Einstein AI related?” I am curious to know your thoughts about Boomi and AI.


Boomi, Salesforce and Einstein: Bringing Intelligence to Your Data - Dell Boomi 




Thameem Khan is a Principal Solutions Architect and Chief Opinionist (yes... that's what my boss calls me) at Dell Boomi and has been instrumental in competitively positioning Dell Boomi. He is good at two things: manipulating data and connecting clouds.

In this article, I will explain why an event driven architecture is valuable to the business, discuss different ways to event-enable an application, and provide a step-by-step solution for how to event-enable any database table.


 Get the processes discussed below from the Process Library here.




Why Event-Enable the Enterprise

The Real-Time Enterprise responds to business and data events in real time or near real time. The traditional way of processing events is in batch mode, once a day, usually during off hours. Being able to process events quickly can be very valuable to an enterprise. Richard Hackathorn wrote a very informative article on real-time event processing called The BI Watch: Real-Time to Real-Value. Richard talks about the net value of processing an event and how that value decreases the longer it takes to react to the event. Here is a graph from the article:



Therefore, the faster you can process an event after it happens, the more value you can get from it. This business event could be a request for a quote from a prospect or an opportunity changing to Closed/Won. The event can also be a threat, like someone creating a support ticket to report a bug in your product. Being able to capture the event and react to it quickly can provide tremendous value to the business.


How to Event-Enable Your Enterprise

Now that I have discussed the business value of processing events in real time, let's cover the different ways this can be done. The best way to capture events is to utilize the capabilities of the system that generates them. Many on-premises and cloud-based platforms provide this capability out of the box. You should always go through the 'front door' of an application and use its published API, if one exists. For example, Salesforce has outbound messages that are very easy to configure, and NetSuite has SuiteScripts that can emit real-time outbound messages in response to events. These events could be someone creating a new account in Salesforce, or a user updating a Sales Order in NetSuite. On-premises applications like SAP can send outbound IDocs in response to events, and Oracle EBS can emit events via Oracle Advanced Queuing. To event-enable these applications, leverage their native event architecture.


Event Enable Any Database Table Framework

This article will discuss how you can event-enable an application that doesn't have native event processing built in. We will leverage the 'back door' of the application and create an event-enabled architecture using its back-end database. There are at least three ways of getting events (inserts, updates, and deletes) from a database table: using a Last Modified Date field, using the database audit log, and using a Change Data Capture (CDC) table populated by triggers. The Last Modified Date approach assumes such a field exists, and you have to query the entire table for updates; this may not perform well in a high-transaction environment, especially if the table has many rows. Capturing deletes is also difficult unless the application does a logical delete, because a physical delete entirely removes the record. The database audit log approach may work, but not all databases support it out of the box, and you may have to license another product from the database vendor or purchase a third-party product. This article will focus on using a CDC table. With this architecture, you don't have to continually query the table for modified rows. The triggers add a little time to inserts, updates, and deletes on the base table while they execute, but this should be negligible. The CDC implementation has two parts: a one-time design-time configuration and a run-time configuration.


Design-Time Configuration

This is a diagram of the design-time configuration. Note that a stored procedure facilitates the creation of the CDC table and the database triggers on the base table. This asset, along with a Boomi integration process, is provided below. The process accepts one input, the name of the base table, so the database assets can be created automatically; a DBA could also create the CDC table and the triggers manually.

 Design-Time Configuration Diagram


This solution to event-enable a database table is based upon a base table, a CDC table, database triggers, Boomi integration processes, and an Apache ActiveMQ topic (you could use a Boomi Atom Queue topic if you have that capability). For example, let's say you want to 'watch' a database table named Person for inserts, updates, and/or deletes and get notified, with all of the event data, when these events happen. First, create a CDC table (boomi_Person) that has the same columns as the base table (Person) plus a few extra columns to hold contextual information about the event; this is discussed later in detail. Then create database triggers on the base table (Person) for On Insert, On Update, and On Delete. These triggers simply insert the events into the CDC table (boomi_Person). This completes the database side of the design-time configuration. At this point, whenever any application runs any DML SQL statement against the base table, the change is automatically captured in the CDC table.
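To make the pattern concrete, here is a runnable sketch of the CDC-table-plus-triggers idea. The article's actual solution is generated by a Transact-SQL stored procedure on SQL Server; this sketch uses SQLite (via Python's standard library) purely to illustrate the same mechanism, and the Person column names are assumptions for a simple base table.

```python
import sqlite3

# Illustrative sketch only: the real assets are generated by the stored
# procedure provided with this article, in Transact-SQL on SQL Server.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Person (PersonID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT);

-- CDC table: the base table's columns plus event-context columns
CREATE TABLE boomi_Person (
    EventID   INTEGER PRIMARY KEY AUTOINCREMENT,
    EventType TEXT NOT NULL,
    EventDate TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
    PersonID INTEGER, FirstName TEXT, LastName TEXT);

-- One trigger per DML event copies the change into the CDC table
CREATE TRIGGER trg_person_ins AFTER INSERT ON Person BEGIN
    INSERT INTO boomi_Person (EventType, PersonID, FirstName, LastName)
    VALUES ('INSERT', NEW.PersonID, NEW.FirstName, NEW.LastName);
END;
CREATE TRIGGER trg_person_upd AFTER UPDATE ON Person BEGIN
    INSERT INTO boomi_Person (EventType, PersonID, FirstName, LastName)
    VALUES ('UPDATE', NEW.PersonID, NEW.FirstName, NEW.LastName);
END;
CREATE TRIGGER trg_person_del AFTER DELETE ON Person BEGIN
    INSERT INTO boomi_Person (EventType, PersonID, FirstName, LastName)
    VALUES ('DELETE', OLD.PersonID, OLD.FirstName, OLD.LastName);
END;
""")

# Any DML against the base table is now captured automatically
conn.execute("INSERT INTO Person (FirstName, LastName) VALUES ('John', 'Doe')")
conn.execute("UPDATE Person SET LastName = 'Stamos' WHERE FirstName = 'John'")
events = conn.execute(
    "SELECT EventType, FirstName, LastName FROM boomi_Person ORDER BY EventID").fetchall()
print(events)  # [('INSERT', 'John', 'Doe'), ('UPDATE', 'John', 'Stamos')]
```

The applications writing to Person need no changes at all, which is the whole point of the 'back door' approach.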


Run-Time Configuration

Now for the run-time configuration. First, download, install, and configure Apache ActiveMQ on a system that the Boomi runtime can access. One Boomi integration process gets all event data in the CDC table ordered by event date, publishes each record to an Apache ActiveMQ topic, and deletes the processed records from the CDC table. This process should be deployed with a scheduled job that executes it automatically, as frequently as every minute. Another Boomi integration process subscribes to the ActiveMQ topic and processes each event. You could implement an ETL scenario, synchronize the data to another table or database, or notify someone about the event; you can implement any type of event-processing logic you require. Because we are leveraging an ActiveMQ topic, other processes can subscribe to the same topic to implement additional types of event handling. It is possible to implement the event-handling logic directly in the process that pulls the event data, but that approach is neither as decoupled nor as extensible. I would only suggest skipping the ActiveMQ topic if you don't want another system in your architecture or you already have a JMS-compliant message bus to leverage. Here is a diagram of how the run-time solution works:



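The polling process's core logic (read in order, publish, delete) can be sketched as follows. This is an illustrative stand-in, not the Boomi process itself: publish_to_topic is a hypothetical placeholder for the JMS publish step to ActiveMQ, and SQLite stands in for SQL Server.

```python
import json
import sqlite3

# Hypothetical stand-in for publishing to the ActiveMQ topic; the real
# process uses a JMS connector shape.
published = []
def publish_to_topic(record):
    published.append(json.dumps(record))

# Pre-populated CDC table standing in for boomi_Person on SQL Server
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE boomi_Person (
    EventID INTEGER PRIMARY KEY, EventType TEXT, EventDate TEXT,
    PersonID INTEGER, FirstName TEXT, LastName TEXT);
INSERT INTO boomi_Person VALUES
    (1, 'INSERT', '2017-09-14 02:20:08', 1, 'John', 'Doe'),
    (2, 'UPDATE', '2017-09-14 02:21:00', 1, 'John', 'Stamos');
""")

def poll_cdc_table(conn):
    """One scheduled run: read events in order, publish each, then delete them."""
    rows = conn.execute(
        "SELECT EventID, EventType, PersonID, FirstName, LastName "
        "FROM boomi_Person ORDER BY EventDate").fetchall()
    for event_id, event_type, person_id, first, last in rows:
        publish_to_topic({"type": event_type, "PersonID": person_id,
                          "FirstName": first, "LastName": last})
    if rows:  # remove only the records that were just published
        ids = ",".join(str(r[0]) for r in rows)
        conn.execute("DELETE FROM boomi_Person WHERE EventID IN (%s)" % ids)

poll_cdc_table(conn)
print(len(published))  # 2 events published to the topic
print(conn.execute("SELECT COUNT(*) FROM boomi_Person").fetchone()[0])  # 0 left
```

Deleting only the EventIDs that were actually read (rather than the whole table) avoids losing events that triggers insert while the poll is running.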

Synchronize to Database Table ETL Implementation

The event-processing implementation for this article synchronizes the data changes in the base table to another table. This is a common ETL pattern that can be modified for your specific ETL requirements. The ETL table is called PersonETL; once the solution is configured, it will always contain exactly the same rows and data as the base table. Here is a graphical representation of this implementation:


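The subscriber's ETL logic boils down to mapping each event type to an action on the target. A minimal sketch, with a dict keyed by PersonID standing in for the PersonETL table (the real process uses a Database connector and map shapes):

```python
# Stand-in for the PersonETL target table, keyed by PersonID
person_etl = {}

def apply_event(event):
    """Apply one CDC event to the ETL target so it mirrors the base table."""
    pid = event["PersonID"]
    if event["type"] in ("INSERT", "UPDATE"):
        # An upsert handles both cases and keeps the target identical
        person_etl[pid] = {"FirstName": event["FirstName"],
                           "LastName": event["LastName"]}
    elif event["type"] == "DELETE":
        person_etl.pop(pid, None)

for e in [{"type": "INSERT", "PersonID": 1, "FirstName": "John", "LastName": "Doe"},
          {"type": "UPDATE", "PersonID": 1, "FirstName": "John", "LastName": "Stamos"},
          {"type": "DELETE", "PersonID": 1, "FirstName": "John", "LastName": "Stamos"}]:
    apply_event(e)

print(person_etl)  # {} -- after insert, update, and delete, the target mirrors the base
```

Because events are applied in EventDate order, the target converges to the same state as the base table after every poll cycle.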

Solution Limitations

This solution was developed on MS SQL Server Express 10, although it should run on most other versions of MS SQL Server. While this event-driven architecture can be manually implemented (CDC table and base table triggers) by a DBA for other database types, the stored procedure that creates the CDC table and the base table triggers is implemented in Transact-SQL and therefore only runs on SQL Server. These are the SQL Server data types that have been tested and are supported:


Numeric

1. bigint
2. decimal
3. float
4. int
5. money
6. numeric
7. real
8. smallint
9. smallmoney
10. tinyint

Date and Time

1. date
2. datetime
3. datetime2
4. datetimeoffset
5. smalldatetime
6. time(7)

String and Binary

1. binary
2. bit
3. char
4. nchar
5. nvarchar
6. varbinary
7. varchar

Others

1. geography
2. geometry
3. hierarchyid
4. uniqueidentifier
5. xml


The following MS SQL Server data types are not supported, mostly because they can't be included in database triggers: image, ntext, nvarchar(MAX), sql_variant, text, timestamp, varbinary(MAX), and varchar(MAX).


Also, this solution doesn't have any error handling or Try/Catch shapes. To harden the solution for production, you may want to add error handling to the Process Person Events from CDC Table and Subscribe to Person Events Topic Boomi integration processes.


Important: This solution is provided as an example and is not intended for production use as-is. It is not delivered nor maintained by Dell Boomi. You should thoroughly evaluate and test for your own scenarios before considering for production use. Use this framework at your own risk.



Steps to Setup this ETL Implementation

  1. Download the file link at the bottom of this article called Event Enable Database Table. Extract the contents on your computer.
  2. From the extracted contents of Event Enable Database Table, find and execute the Create Person Table.sql, Create PersonETL Table.sql, and CreateBoomiEventTableAndTriggers_SP.sql scripts in your instance of MS SQL Server. You may have to include a 'Use <database name>;' at the top of each script to make sure the assets get created in the proper database.
  3. From your Dell Boomi AtomSphere account, install the Database: Event Enable any Database Table Process Library into your account. This "container" process contains three processes.
  4. Read the process descriptions for each of the three Boomi integration processes contained in the Process Library: Event Enable Table by TableName, Process Person Events from CDC Table, and Subscribe to Person Events Topic. The instructions on how to configure each process for your environment are included in the process description. Basically, you have to enter appropriate information in the SQL Server DB Connection to point to your instance of SQL Server and walk through the wizard for each Database profile.
  5. Open the Event Enable Table by TableName Boomi integration process in your Boomi account and Test it on a Boomi runtime Atom/Molecule/Cloud that has access to your SQL Server instance. Confirm that the CDC table, boomi_Person, and the three triggers were created for the base table, Person.
  6. Make sure you deploy the following Boomi integration processes (as noted in the process descriptions) to your Boomi runtime that has connectivity to your SQL Server instance: Process Person Events from CDC Table and Subscribe to Person Events Topic. Also, be sure to set up a scheduled process to automatically run the Process Person Events from CDC Table process every minute, or at another time interval.
  7. Create a topic on ActiveMQ called DatabaseEvents.
  8. This should complete your configuration. You should now have a working implementation of the Event Enable Any Database Table framework.


Testing your Solution

  1. Open the BoomiEventDrivenPersonTableQueries.sql file (included in the zip file below) in your favorite MS SQL Server database client.
  2. Run the following SQL statements in the BoomiEventDrivenPersonTableQueries.sql file. They should all return 0 rows.
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL
  3. Execute the following SQL statements in the BoomiEventDrivenPersonTableQueries.sql file. They should insert 3 records in the Person table.
    1. insert into Person (FirstName, LastName) values ('John', 'Doe')
    2. insert into Person (FirstName, LastName) values ('Jane', 'Smith')
    3. insert into Person (FirstName, LastName) values ('Dell', 'Boomi')
  4. Now run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person (should return the 3 inserted rows)
    2. select * from boomi_Person (should return 3 insert events)
    3. select * from PersonETL (should return 0 rows)
  5. After the scheduled job kicks off the Process Person Events from CDC Table process (you will know this from the Process Reporting page), you will see the following from running the queries again:
    1. select * from Person (should return 3 rows)
    2. select * from boomi_Person (should return 0 rows)
    3. select * from PersonETL (should return the same 3 rows as Person)
  6. Now run the following SQL statements:
    1. update Person SET LastName = 'Stamos' WHERE FirstName = 'John'
    2. update Person SET LastName = 'Johnson' WHERE FirstName = 'Jane'
    3. update Person SET FirstName = 'Go' WHERE LastName = 'Boomi'
  7. Now run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person (should return the 3 updated rows)
    2. select * from boomi_Person (should return 3 update events)
    3. select * from PersonETL (should still return the original 3 rows)
  8. After the scheduled job kicks off the Process Person Events from CDC Table process, you will see the following from running the queries again:
    1. select * from Person (should return 3 rows)
    2. select * from boomi_Person (should return 0 rows)
    3. select * from PersonETL (should match the updated Person rows)
  9. Run the following SQL statement:
    1. delete from Person
  10. Run the following 3 queries quickly, before the scheduled job kicks off to process the records in the CDC table (boomi_Person):
    1. select * from Person (should return 0 rows)
    2. select * from boomi_Person (should return 3 delete events)
    3. select * from PersonETL (should still return 3 rows)
  11. After the scheduled job kicks off the Process Person Events from CDC Table process, run the queries again. There should now be 0 rows in all 3 tables.
    1. select * from Person
    2. select * from boomi_Person
    3. select * from PersonETL


User Guide Articles

Here are some links to our User Guide, which you may find useful when using and configuring this event-enabled database table framework.


I would like to thank my former colleague, Steven Kimbleton, for creating the stored procedure used in this solution.


Harvey Melfi is a Solutions Architect with Dell Boomi.

There are plenty of whizbang things people build with Flow that aren't big enterprise apps. Oftentimes, it's just a quick tool that you wish you had in your tool belt, something you can utilize and build on later. This was one such scenario, worth a quick write-up to get your juices flowing (pun intended) on what other sorts of things you can build! Ever wanted to just jot down a quick note or reminder and have it save to a cloud location like Google Drive for editing later? Welp, you've come to the right place to set it up! We'll be building this super simple tool that goes from Flow-->AtomSphere-->Google Sheets:



1. Create a Google Sheet

Well, first things first, create the Google Sheet that you want to write to, and note the ID it creates (we'll use that later):


2. Setup AtomSphere

1. Now we are going to set up AtomSphere to LISTEN for Flow to send it a variable, in this case "aQuickNote", that we want to post to that Google Sheet we just made. Go to AtomSphere and create a new Process:


2. Set up the Connector to just listen in, like this: 

Note: You don't need a response profile, because we're not going to expect anything in return, we just want to SEND.


3. Configure it with the two variables Google cares about, the VALUE, and the SpreadsheetID: 


It should look super clean with your first shape like this: 

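For context on what those two variables feed, the Google Sheets connector ultimately makes a Sheets API v4 append call on your behalf. A hedged sketch of that underlying REST request is below; the sheet ID, the "Sheet1!A:A" range, and the OAuth token are all placeholders, and in Boomi you never write this call yourself.

```python
import json
import urllib.request

# Placeholders: substitute your real sheet ID and OAuth access token
spreadsheet_id = "YOUR_SHEET_ID"
access_token = "YOUR_OAUTH_TOKEN"

# Sheets API v4 values.append endpoint (what the connector calls for you)
url = ("https://sheets.googleapis.com/v4/spreadsheets/%s"
       "/values/Sheet1!A:A:append?valueInputOption=USER_ENTERED" % spreadsheet_id)

# The VALUE from Flow becomes a one-cell row in the request body
body = json.dumps({"values": [["aQuickNote text goes here"]]}).encode()
request = urllib.request.Request(
    url, data=body, method="POST",
    headers={"Authorization": "Bearer " + access_token,
             "Content-Type": "application/json"})
# urllib.request.urlopen(request)  # uncomment with real credentials
print(url)
```

This is just to demystify the two fields: the SpreadsheetID selects the sheet in the URL, and the VALUE rides in the request body.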

4. Add a Google Sheets connector shape: 


5. Create a new Google Connection if you don't have one already. For this example, we'll start from scratch and show you how to get one. You'll need an API key from Google, and they tell you how to get that HERE.


6. From the Credentials page, select your connection: 


7. Copy your ClientID and Client Secret and paste those into your AtomSphere session:


When you click "Generate", Google will prompt you with a few things: 

And it'll give you this critical piece!  



Note: This is generated by that callback URL when you put your BoomiID above! Make sure that's in there!


Note: You may get an "invalid client" error from AtomSphere. If so, make sure you copy/paste without a trailing space (like I just did [doh!]), put them back in there, click GENERATE again, and you should see:


8. Now that we've got a connection to Google, we just need to tell it what to DO once we send it the data:


9. Click the PLUS SIGN in the "Operation" field to create a new Operation.


10. This is where Boomi will do all the heavy lifting for you! Click "import":


11. Choose your Google Connection you just made: 


12. Insert that SheetID you created in Step #1!



13. Click Next, and you'll be prompted like this (be sure to choose "Record data"):


14. AtomSphere will know to create the proper request and response profiles for you!


15. Click "Save and Close": 


16. Now create a map that sends the data the way that Google wants to see it!: 




17. Now connect your shapes and deploy the process to production, and update the service that calls it, like this:


Your finished map should look about like this: 


3. Setup Flow

1. Create a new Service to send data to AtomSphere (if you don't already have one), like this:


2. A lot is happening in this next slide, but the idea is that you want to grab your creds from your Atom in AtomSphere (red arrows) and paste them into the corresponding fields (yellow arrows):


When you click "Continue" in Boomi Flow, take a look at the "Preview Actions and Types" and you'll see that it was added: 


Save that service with the password (repaste it) and click Save:


3. Now go import that Service into your flow: 



4. Now create a new "Page" from the left side (this is where we'll ask what we want to save there): 


5. Pull an "input" into the navigation pane: 


6. Map the value: 


7. Create a new value with what we want to send over to Google!


8. Save the value. Save the Page. Back to the main screen and drag the Start Shape to the New Page you just made: 


9. Now we gotta send that to Google, via the Service we just made above, so drag in a "Message" component: 


10. Set the values like this: 









11. Ok, now we tie them all together and click "Publish", cross our fingers, and...:


If all went as planned, you should see the same thing! Check it out: Boomi Flow to Google Docs! - YouTube 


Congrats, you just made a quick place to save notes in no time!


Extra credit: 

Let's clear out the variables that it saves, so it can be a clean slate every time!


Drag an "Operator" into Flow, and configure it to set that value to "Empty" every time it loops back, like this:


Let's see what that does now when the map looks like this: 


One more neat trick, it is mobile-friendly, out of the box!  See this --> Mobile (Boomi Flow) to Google Docs! - YouTube 


Congratulations, you just made a quick Reminder App via Boomi!


Ok, ok, ONE more extra-credit-worthy piece --how about a way to add in and SEE your current list? Easy enough, just drag in a new "Step" (See current list) and use an iframe to call the Google Sheet, like this:



Now it'll look like this from the "current" tab: 

Want to save this JSON for your own? Another fine aspect of Boomi Flow --Download the Flow config JSON file attached below and run it for yourself to see the magic!


Andy Tillo is a Boomi Sales Engineer for the Boomi Flow product based in Seattle, WA. I come from an infrastructure background, so it is nice to have something more code-based to sink my teeth into! Boomi Flow has been a great platform to get there!

Today we had a call about GetGuru and how we could use it internally. GetGuru is a knowledge management solution that we've been trying to make easier to use in context. I heard that it had an API we could hook into, so I thought this was a perfect situation to tie our front end (Boomi Flow) to a back-end piece from GetGuru. In this post we'll show you how to add new knowledge cards to your GetGuru environment by way of Boomi Flow and AtomSphere!



The Big Idea

Ok, so here's what we're trying to do here:

  1. Create a Boomi Flow to present a UI to enter information to create a new Guru card.
  2. Upon submission, the Flow invokes an AtomSphere integration process using the Flow Service Listener connector.
  3. The process uses the HTTP Client connector to call a Guru API to create the card.


Ready? Let's get started!


Setting Up Postman

To help understand the GetGuru API, grab some internal IDs to use later, and create some sample request and response data to assist with importing Profiles in AtomSphere, a great utility to use is Postman. Postman acts as a middleman that lets you see exactly what data is being sent, where it is going, and what the response looks like.


So…we can deduce what to send from this:


Specifically this:

getguru says this:


Postman is a great place to start, as you know what to enter in each of those because of two things:

  1. The very first page of GetGuru says this:
    To test your authentication credentials, you can use the curl command line tool.
    curl -u USER:TOKEN -D -

If your credentials were correct, you should receive a response that lists information about your Guru team:


HTTP/1.1 200 OK
Content-Type: application/json


[ {
 "status" : "ACTIVE",
 "id" : "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
 "dateCreated" : "2016-01-01T00:00:00.000+0000",
 "name" : "My Guru Team"
} ]

To validate you’re able to send, use curl:


C:\curl\src>curl -u -D -

HTTP/1.1 200 OK
Content-Type: application/json
Server: Jetty(9.4.1.v20170120)
transfer-encoding: chunked
Connection: keep-alive

[ {
 "status" : "ACTIVE",
 "id" : "d50be168-8e81-48c9-8306-6d4eea484ef3",
 "dateCreated" : "2017-09-14T02:20:08.197+0000",
 "totalUsers" : 1,
 "profilePicUrl" : "",
 "name" : "FlowTest"
} ]


C:\curl\src>curl -u

[ {
 "lastModified" : "2017-09-15T02:41:39.176+0000",
 "id" : "81d55e9c-3922-46cf-be89-877477e6f3ca",
 "collection" : {
   "id" : "c576da5d-80d1-4606-9543-7da62ee13421",
   "name" : "General"
 },
 "lastModifiedBy" : {
   "status" : "ACTIVE",
   "email" : "",
   "firstName" : "Andy",
   "lastName" : "Tillo"
 }
} ]

So now we know the BoardID and the CollectionID that GetGuru says we need in order to make a call, as well as what GetGuru WANTS to receive, in JSON format. We can copy that GetGuru snippet and save it to a .JSON file on your desktop. Call it Getguru_WANTS_THIS_INPUT.JSON.


The file should look like this:



{
 "preferredPhrase": "What _I_need",
 "content": "content_goes_here",
 "shareStatus": "TEAM",
 "collection" : { "id" : "c576da5d-80d1-4606-9543-7da62ee13421" },
 "boards": [{"id" : "81d55e9c-3922-46cf-be89-877477e6f3ca"}]
}


A little tip: You can put whatever you want in between the “” --Boomi takes that out and just uses the skeleton of what is in there!


Setting up AtomSphere

Now we'll map that JSON into the request we send through the HTTP Client connector, here:


Opening that link up looks like this (we’ll talk about the left side later on):


To get the data on that right side, click “choose” (make sure dropdown is JSON):




Now you can map the fields like this (cause it knows the JSON format now):


Now send it on to the HTTP Client connector:


Here's the HTTP Client Connection:


And the HTTP Client Operation (note all you have to change is the JSON input header):


Now it’s going to come OUT of the HTTP connector (from GetGuru) with a bunch of new stuff! Conveniently, stuff we saw when we ran the Postman query at the beginning, remember?


So all that stuff below is what the RESPONSE from GetGuru is sending back; ergo, we can now copy that whole bunch of text and do the same thing we did with the first map. It's now a sample of what we want to get OUT and eventually send back to Flow! Copy/paste it all into a new JSON profile with "Import", the same way as before, and open the next map:


Now, again, do the same mapping dragging/dropping exercise! But this time, with that OUTPUT from GetGuru, and map it BACK to what we originally were LISTENING for from Flow:


So this is the part where we talk about that piece I said we'd cover later: the first shape, the LISTENER. That is ultimately what we're mapping BACK to. We LISTEN for Flow to send us something, take that data and push it through the HTTP Client connector, and ultimately send it back to Flow with the same variables that were requested in the first place.


Don’t forget this little guy (return documents), he’s important:


Here’s what the Start shape config looks like (the LISTENER):


Click the pencil in “operation” and it’ll show you this:


These guys are just saying “LISTEN” for a request (profile). It’s just sitting there, waiting for Flow to talk to it. When it receives something, it starts its business. Here’s that first listener call:


(We’ll show how we get those variables in the next section, let’s finish up with this one first.)


The RESPONSE PROFILE ("Give this back to Mwho") is a set of variables that you're telling Flow to keep an eye out for, because they'll be coming back to you. Notice they have EXACTLY the same spelling as the last Map shape that we tied them to; don't mess that up, or you'll be banging your head for a while!


NOTE: In order to get a response back from AtomSphere, the Output Names in the Message Action must match the Response Profile names from the AtomSphere listener.

If any changes are made to the profiles in the AtomSphere Flow listener process, you must update/refresh the Boomi Flow service associated with that process.



Then DEPLOY that process:


And deploy the Flow Service that ties that action to your Process:


So as everything is tied up on the AtomSphere side, let’s go see what we set up in Flow…


Setting up Flow

It’s a very simple flow to create in Flow. (Or if you want to take the easy way out, get the Flow config from the attached JSON file below!)


The first thing you want to do is import that AtomSphere connector in your tenant. Click Services-->Install Service.


Call it whatever you want, and the URL is based on the Flow Service setup in AtomSphere:


You get the first part of your URL from AtomSphere-->Manage→:


You get the second part of your URL for the Service in AtomSphere from the Flow Service here:


You get your username and password from AtomSphere-->Manage→[Atom of choice]-->Shared Web Server:


Once you enter that info, click “Save Service”:


Now we’re cookin’! You have mapped AtomSphere to Flow, and Flow to AtomSphere now!


Go back to your Flow and click (or make) the first page component:


Edit page layout. Notice you’re just mapping components now:


Create each component as an “input” with your choice of variable name (type: string) on the right side:


The magic comes in when you click (or create) the “send to getguru” button. This is where the magic happens!


Note: Prior to that, you may need to setup the “New message service” and choose your service:


OTHER NOTE: if you DON’T see your service in there, Navigate to Shared Elements at the top right corner to import it:



Now, back to the message config we were talking about:


Notice how the INPUTS are the same name as the REQUEST PROFILE in AtomSphere:


Then note how the OUTPUTS are the same as the RESPONSE profile in AtomSphere LISTENER process; convenient, huh?


These need to be spelled EXACTLY the same, don’t forget that --these are the magic sauce! Note the VALUE names on the OUTPUTs --these are what we’re going to be calling in the last STEP shape in Flow, to show the user after it’s gone through the pipe!


The final step is just showing back to the page, those variables that you captured in the OUTPUT section:


We’re done! Now let’s see how it works. Publishing/running the flow looks like this:


...and when we hit “send” it looks like this:


Bada boom, you just made a card in GetGuru via Flow and Boomi!

Extra Credit

A trick of the trade (or a test of your skills of what you’ve learned) --you now have the CardID that was given back to you up above:

CardID: d9206c70-22bb-4fcc-90c7-1d8d72ce15f9


You can navigate directly to a card like this:<CARD_ID_HERE>. How can you concatenate some strings together to show the user a link like that, instead of just a Card ID?
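One possible answer to that extra credit, sketched outside Flow (in Flow itself you would concatenate values in a template or presentation component): the link is just the base card URL joined with the CardID. The base URL below is a placeholder, since that part depends on your Guru environment.

```python
# Build the full card link from the CardID returned in the OUTPUTs.
# base_url is a placeholder; substitute your Guru card URL.
base_url = "https://<your-guru-domain>/card/"
card_id = "d9206c70-22bb-4fcc-90c7-1d8d72ce15f9"
card_link = base_url + card_id
print(card_link)  # https://<your-guru-domain>/card/d9206c70-22bb-4fcc-90c7-1d8d72ce15f9
```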


Andy Tillo is a Boomi Sales Engineer for the Boomi Flow product based in Seattle, WA. I come from an infrastructure background, so it is nice to have something more code-based to sink my teeth into! Boomi Flow has been a great platform to get there!

It's been a little while since my last post--it's a very exciting and busy time here at Boomi as we grow rapidly. Recently, I sat down with the folks at TalkingIO to talk about everything from iPaaS and bimodal IT to microservices, serverless architecture, and even the solar eclipse.


Listen to the podcast here: Episode 4: Boomi. As always, happy to hear your comments and feedback!


Also, quick plug for Boomi World next week! Be sure to check out my session on "Future Trends for the Connected Business". Hope to see you there!


Thameem Khan is a Principal Enterprise Architect at Dell Boomi and has been instrumental in competitively positioning Dell Boomi to our large enterprise customers. He is good at two things: manipulating data and connecting clouds.

We're interested in hearing our customers' thoughts regarding the names of our technology platform and products, so I am writing to ask that you please answer a brief survey. It should only take about 10 minutes of your time.


You can access the survey at:


In appreciation, all those who complete the survey can be entered into a drawing to receive one of two $150 gift certificates to


Thank you for your feedback!



How can increased automation improve internal collaboration and productivity? How do workflow apps help to create a better overall customer experience? Which industry trends are pushing businesses to prioritize workflow automation?

Now you can get answers to these questions and more straight from the industry expert in low-code application development: Steve Wood, founder of Manywho and VP & GM of Boomi Flow.


On Tuesday, Aug. 29, you are invited to join Boomi’s first ever LIVE Twitter Q&A session from 11:30 am – 12:00 pm PST. Steve will be answering any low-code or workflow related questions from attendees throughout our 30-minute session, meaning it’s your opportunity to get expert answers to your most important questions in real-time!


Our live session is open to IT and non-IT pros alike. So whether you are a professional developer looking to learn more about low-code environments, or a business analyst interested in the productivity benefits of increased automation, you can get answers from Steve and learn how Boomi Flow can help you achieve your most important business goals. 


Take advantage of this opportunity to speak LIVE with a true industry expert in building and deploying workflow apps. Follow Boomi & Steve on Twitter for all the latest Boomi Flow updates & info, and join our live Twitter Q&A on Aug. 29 at 11:30 am PST by tweeting your questions to @SteveWoodWho using “#BoomiFlow”.

Boomithon17, our first Boomi Flow (ManyWho) hackathon, held Saturday, July 29th in San Francisco, brought together diverse minds, from developers to creators to idea gurus. Creative innovation was the driving theme, as Steve Wood observed: "I'm always curious to see what we can do with Boomi Flow. It's awesome to see creative minds build on our platform."


The St. Francis Suite at the Westin St. Francis was filled with passionate people eager to brainstorm ideas and build flows and apps. We kicked off the event with Steve Wood giving an overview of Boomi Flow and a demo of the platform. Right after, teams set out to find novel ideas, ranging from enterprise challenges to real-life problems.


The teams had two hours to brainstorm ideas and three hours to build innovative flows, pushing the boundaries of existing technology solutions. Ideas may overflow at first, but narrowing them down to a compelling problem is the real dilemma, and most teams spent the majority of their time zeroing in on a substantial challenge worth solving. Once the idea was pinned down, each team created a ManyWho account and added all team members to a shared tenant so they could collaborate. Building flows this quickly, and distributing the effort across the team with the help of tutorials and documentation, was crucial to each team's success. It's always exciting to see how diverse teams use the platform; it shows the true value the offering brings.



The event wrapped up with the demo pitches and winner announcement. 

Our first prize went to “Apartment Finder,” a creative take on helping renters find apartments. Much like Hired, applicants submit their rental application and owners reach out to qualified renters. Our second-prize winner wowed the judges with a fun twist, “Find an Excuse,” for situations that are out of your control. With a key phrase like “baby,” you can trigger a text message on a timer, then decide if you want a way out and take the next steps.


A few other projects are worth mentioning. “Neighborhood Watch” was a community portal that helps users keep up with what's happening in the neighborhood and submit home improvement projects. “Design Approval” supported multi-level approvals within a single flow, collaborating across different stakeholders. Considering these were built in just a few hours with Boomi Flow (ManyWho), I can only wonder about the endless possibilities across our Boomi customers and partners.



Thank you to the participants for their encouragement and valuable feedback on functionality and documentation. The teams expanded our use cases and broadened our thinking, beyond the business needs that we have encountered.


Stay tuned for more! If you are interested in participating in the next hackathon, email us at

To learn more about Boomi Flow (ManyWho), get started here.

In our next Community Champion spotlight, we talk with Sjaak Overgaauw about how he found himself integrating and managing data in the cloud!



What's your take on MDM (master data management)?


Sjaak: I think if you have an MDM hub in the middle, it’s a much better design and saves you a lot of time in building point-to-point interfaces. But most customers don’t think in terms of MDM or in terms of the idea behind MDM. They always look at this problem in the traditional, point-to-point way.


Read the full interview here: The Winding Road of a Dell Boomi Integration Specialist.


Look for more interviews with Community Champions coming soon!

Ever wish you could spin up an Atom in Azure quickly from the Azure portal? Well now you can! Dell Boomi recently released an Azure Marketplace offering that makes installing a Boomi Atom into your Microsoft Azure infrastructure simple. In this post, we are going to explore how you can use it to install an Atom (including all the required Azure resources) from the comfort of your browser in a matter of minutes.



Before starting, you need to have two items:

  1. An AtomSphere user that has the Atom Management privilege in the account where you will be installing the new Atom. If you don't have an account, you can sign up for a free trial AtomSphere account.
  2. A Microsoft Azure subscription. If you do not have one, you will need to create a free Azure account first.


Deploying the Azure Marketplace offering

  1. Log in to the Microsoft Azure portal

  2. Click on the '+' icon in the top left. Use the search box to find and select the Dell Boomi Atom (Windows) offering.

  3. Click Create to begin configuring the deployment.

  4. The first step, Basics, asks you to provide some basic information that is needed to create all the Azure resources. The user name and password you provide will be used to create an administrator user for the new virtual machine. Once you have provided all the information, click OK.

    I have found that choosing to create a new resource group makes it easier to manage all the resources that are created for the Atom.

  5. The Infrastructure settings step asks you to specify a name for the new virtual machine as well as pick how large you want it to be. Because workloads differ from Atom to Atom, picking the right size will depend on what you will be doing with the Atom. Once you have provided all the information, click OK.

  6. The next step, AtomSphere User Information, is where you provide your AtomSphere information. The user name and password provided here are the credentials that will be used to install the Atom. The AtomSphere account ID is the ID of the account that the Atom will be tied to. The value you provide for Atom name will be displayed in AtomSphere. Once you have provided all the information, click OK.

    Remember, the AtomSphere user must have the Atom Management privilege in the account where you will be installing the new Atom.

  7. The Summary step is a final validation. Verify that the information is correct and that validation passes, then click OK.

  8. The final step, Buy, presents you with the Terms of use. Once you have read through it all, click Purchase to start the deployment.

  9. That's it! Now you can kick back and watch the deployment run....

  10. Once it is done you can review the deployment log.

  11. At this point your Atom is up and ready to run your integration processes.


Wasn't that easy? In under 10 minutes you were able to spin up a new Atom running on its own dedicated virtual machine in Azure, without ever leaving your browser. Once you have tried it out, let us know what you think. Are there additional options you'd like to see in the next version of our offering? If so, I would love to hear about them!

UPDATE 7/28: 

See you at the hackathon tomorrow at 1:00pm PT.


1:00 PM - 1:30 PM Check-in
1:30 PM - 2:30 PM ManyWho (Boomi Flow) Introduction [Appetizers and Beer]
2:30 PM - 3:00 PM Hackathon Kick-off | Create Your Team
3:00 PM - 8:00 PM Implementation: Brainstorm and Plan (2 hours), Build your App (3 hours) [Snacks]
8:00 PM - 9:00 PM Demo Pitch | Judge Evaluation [Dinner]
9:00 PM - 10:00 PM Winner Announcement


Step 1: Sign up for a free trial account - 
Once your account is registered, you can invite your team members to the Flow/Tenant and collaborate on projects.
Step 2: Check out ManyWho docs to get started

Step 3: Your possibilities are endless... go for it!


Prizes: (*Max. of 4 participants in a team)
1st Place: $500 Amazon gift cards for each team member*
2nd Place: Raspberry Pi Starter Kits for each team member*


UPDATE 7/19: 

Our 10-day countdown starts today! Only a few spots left -- Register Now!

We've attached the information packet for those interested in checking out ManyWho (Boomi Workflow).


Boomi invites creators and developers to converge to solve enterprise problems, and push the boundaries of existing technology solutions. Join us on Saturday, July 29th to connect, build enterprise apps and meet the CEO of ManyWho, Steve Wood.


Build Enterprise Apps in 3 hours - Think about the endless possibilities!
Salesforce/Twilio/Box apps, messaging apps, chat bots, Text-to-Speech assistants, offline mobile apps and more.

Prizes: (*Max. of 4 participants in a team)
1st Place: $500 Amazon gift cards for each team member*
2nd Place: Raspberry Pi Starter Kits for each team member*


Register now and join us at the Westin St. Francis in San Francisco on Saturday, July 29th.

ManyWho (Boomi Workflow) - Enterprise Apps Hackathon 2017 


To learn more about ManyWho, check out Getting Started with AtomSphere and ManyWho!

Today, Boomi announced our Spring 2017 platform release, highlighting key platform innovations made generally available to customers in the first half of this year. Many of these enhancements and new features were requested by the Boomi Community (thank you for your continued support!). 


The Spring 2017 release is grouped into three key functional areas, spanning different products across the Boomi platform:

  • Data Governance and Security: Ensure that data meets business needs as well as compliance and regulatory requirements
  • Integration Accelerators: Increase IT productivity with pre-built integrations and reusable components
  • DevOps Automation: Provide enterprise-scale delivery and deployment to reduce the complexity of IT operations 


Learn More

  • To learn more about these new Boomi platform capabilities, read our blog.
  • For more details and deep dives on past releases, click here.


Save the Date for the First Annual Dell Boomi User and Partner Conference!


Mark your calendar for Boomi World 2017 — the first annual Dell Boomi User and Partner Conference, September 20-22 in San Francisco!


Click here to add to your Outlook calendar


By popular demand, our inaugural Boomi World will bring together the global Boomi community to learn, network and share best practices on using Boomi’s #1 iPaaS to build a connected business.


Click here to be alerted when registration opens later in June and take advantage of early bird discounts. Join us for a three-day extravaganza at the Westin St. Francis on Union Square that will help you make the most of Boomi to digitally transform your organization:


  • Learn best-practice techniques and the latest iPaaS innovations
  • Network with peers, analysts and Boomi executives, technologists and partners
  • Share your insights, successes and feedback on Boomi iPaaS


Who Should Attend

CIOs, IT leaders, integration developers and application owners charged with accelerating innovation by connecting applications across the enterprise.


Stay tuned for details of our jam-packed agenda including informative breakout sessions, illuminating keynotes, case study presentations and hands-on training, starting with Boomi Certified Training on Wednesday, September 20.

Sign up for early bird registration


To learn more, visit or email us at