Database Integration Guide

Document created by rich_patterson (Employee) on Feb 22, 2016. Last modified by sheng_liao462475 on Jan 8, 2018.

The AtomSphere Database Connector allows for seamless integration with a variety of on-premise and cloud database providers.



User Guide Articles

Here are some links to our User Guide and other reference material, which you may find useful when using and configuring the Database connector.


Process Library Examples


Common Scenarios


Scenario 1 - Connecting to a Database

The Database connection reference guide page lists the available fields and their usage in the Connection component. In this example, we will be connecting to an Oracle database.


Here is a link to Oracle’s reference guide. Table 8-1 lists standard database connection properties. Typically you will need databaseName, user, password, portNumber, and serverName at a minimum; however, certain configurations require additional information.


Scenario 2 - Querying Records

Setting the Grouping Options in the Database Read Operation is critical for most database integrations. The Data Process step offers no database record splitting options mid-process, so you must define the operation's Link Element and/or Batch Count to group records by a matching field or by record count.



Database Read Profile Statement:



Return Data (snippet):

12346|^|1|^|NETWORK CARDS


If you want to map individual logical orders to a destination system, you should batch the orders in the Database Read Operation.



Link Element = ORDERNUM (Element definition from Profile)

Batch Count = 1 (Meaning 1 unique ORDERNUM per batch)



Document 1:



Document 2:

12346|^|1|^|NETWORK CARDS
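Conceptually, the Link Element grouping behaves like grouping the query's result rows on a key column before emitting documents. Here is a rough Python sketch of that idea (an illustration only, not Boomi's implementation; the 12345 rows are hypothetical sample data):

```python
from itertools import groupby

# Rows as returned by the Database Read, fields ORDERNUM|^|QTY|^|DESC
rows = [
    ("12345", "2", "KEYBOARDS"),       # hypothetical order
    ("12345", "4", "MONITORS"),        # hypothetical order
    ("12346", "1", "NETWORK CARDS"),
]

# Link Element = ORDERNUM, Batch Count = 1: each outgoing document
# holds the rows for one unique ORDERNUM. groupby needs the rows
# sorted by the key, just as a Link Element expects an ordered read.
documents = [
    ["|^|".join(row) for row in group]
    for _key, group in groupby(rows, key=lambda r: r[0])
]

for i, doc in enumerate(documents, 1):
    print(f"Document {i}: {doc}")
```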


Setting the Max Rows option is helpful when you want to limit the number of records entering the Process. If you are building a process that migrates a large data set to another system, you will ultimately want to prevent the same records from being sent repeatedly during each Process execution.


In order to prevent this, you can try one of the following options:


  • Add a final branch into a SQL Type Program Command step at the end of the Process flow to update each record based on its record ID
  • Add an SQL Update map function into one of the main Data Maps to perform the update per record
  • Add a new map and Database Write Connector that builds and executes the SQL update
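All three options boil down to the same mark-as-processed pattern: read only unprocessed rows, send them, then flag each one by its record ID so the next execution skips them. A minimal sketch using Python's stdlib sqlite3 as a stand-in for the JDBC connection (the orders table and its processed flag column are hypothetical):

```python
import sqlite3

# Hypothetical source table with a "processed" flag column
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders "
             "(id INTEGER PRIMARY KEY, item TEXT, processed TEXT DEFAULT 'N')")
conn.executemany("INSERT INTO orders (item) VALUES (?)",
                 [("KEYBOARD",), ("MONITOR",)])

def run_process(conn):
    # Database Read: select only records not yet sent
    batch = conn.execute(
        "SELECT id, item FROM orders WHERE processed = 'N'").fetchall()
    for record_id, _item in batch:
        # ... send the record to the destination system ...
        # Final branch / Program Command: flag the record by its ID
        conn.execute("UPDATE orders SET processed = 'Y' WHERE id = ?",
                     (record_id,))
    conn.commit()
    return batch

first = run_process(conn)   # picks up both records
second = run_process(conn)  # nothing left to send
print(len(first), len(second))  # 2 0
```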


Example Program Command Script:



Example Parameter:

Profile Element > Database > Orders Select Profile > OrderNum Element


Scenario 3 - Sending Records and using the Commit Options

Setting the Commit Options in the Database Write operation is helpful when wanting to finalize the execution of SQL statements in groups or by Data Sets for multi-table relationships such as Header/Detail or Parent/Child.



Operation Configuration:

Commit Option: Commit by Profile

Batch Count: 1


The goal is to commit each unique order/orderdetail instance individually, in order to protect against connection failures mid-process. By using the commit options, instead of the entire order insert failing, the main order record and a portion of the detail records may still be committed.



Common Database Connection Configurations



When using the MySQL DB Connector, you’ll need to download and install the MySQL JDBC driver in the ‘lib’ folder of the local Atom folder within the ‘Boomi AtomSphere’ folder. (Because of licensing issues, we are not permitted to bundle this driver with our Atom.)

  1. Download the driver.  As of this date, the driver can be obtained from the following website:
  2. Upon downloading and extracting the folder, the only file you’ll need is the following:
    1. mysql-connector-java-5.1.18-bin
    2. Copy this file and paste it into the ‘userlib/database’ folder of your locally installed Atom:
      1. For example:  C:\Program Files (x86)\Boomi AtomSphere\Atom - Boomi_Local_Atom\lib\
      2. If the directory does not exist, create it, and place the file there.  You will then need to restart your atom to pick up the new folder.
  3. When creating the Database Connection, for Database Type, select MySql


Oracle 12c instance

For Oracle 12c you will need to install its ojdbc driver as a custom driver as described here:


All ojdbc drivers can be found here:


Oracle Real Application Clusters (RAC)

You have verified that the database name, server, username, password, and port number are correct, but you may receive the following error:

"Listener refused the connection with the following error: ORA-12505, TNS:listener does not currently know of SID given in connect descriptor"


With RAC, the connection URL needs to point to the appropriate SERVICE_NAME for the database rather than an individual database instance as identified by the SID. The URL format is slightly different; therefore, you should use the Custom database type for this.


SID oriented URL:



SERVICE oriented URL (suitable for RAC):



In the second example, you may substitute the SID for the SERVICE, and the connection should still work. Also, please make sure there are no extra spaces in the connection URL or service name.


In order to further verify the connection URL and that there are no firewalls or connectivity issues, use another tool (such as SQL*Plus or Toad) to connect to the DB outside of AtomSphere. If you receive a "TNS:listener" error, there is some problem identifying the database that you want to connect to.



Connectivity to PostgreSQL is performed with the generic Database Connector with Custom Database type:


  1. Download the PostgreSQL JDBC Driver:
  2. Copy the driver jar into the local Atom's <atom_install>/userlib/database directory.
  3. Restart the Atom.

  4. Configure the Database Connection as follows:


Database Type:   Custom

User:            <postgresql username> 

Password:        <postgresql password> 

Class Name:      org.postgresql.Driver 

Connection URL:  jdbc:postgresql://hostname:port/databasename 


To import the Database profile in the Database operation for the Postgres tables, import using a local atom, or create the profile elements manually.



Amazon AWS RDS PostgreSQL using an SSL cert from the Boomi Atom Cloud

Dell Boomi has imported the AWS certificate into the keystore for the hosted Dell Boomi Atom Cloud. To make an SSL connection to an AWS Postgres instance, users will have to append the parameter ssl=true to their connection string.


Example connection string is shown below:


Amazon AWS RDS PostgreSQL using an SSL cert from a local Atom

  1. Place the JDBC driver in ATOM_HOME/userlib/database directory (create one if not present). You can download the driver from
  2. The Amazon RDS root certificate needs to be installed into the Java keystore. The certificate bundle provided by Amazon contains multiple certificates; you will need to extract the Amazon RDS Root CA cert and place it into the keystore.
    1. The first step is to convert the certificate to DER format, which Java understands:
      openssl x509 -outform der -in certificate.pem -out certificate.der
    2. Next import the certificate into java keystore:
      keytool -import -alias your-alias -keystore cacerts -file certificate.der
  3. Have the connection URL in database connector as jdbc:postgresql://<instance_end_point>:<instance_port_no>/<db_name>?ssl=true
  4. The connection now uses SSL for communication with PostgreSQL DB instance.


There are several ways to verify whether the connection uses SSL:


  1. Set the appropriate SSL debug JVM option in ATOM_HOME/bin/atom.vmoptions. Restart the Atom.
    On running the process you can now monitor the SSL communication along with the SSL handshake being performed.
    When providing a keystore which does not have the right certificate, you will see the following error:

[ handleException] ASyncMPollExecutor-thread-1, handling exception: PKIX path building failed: unable to find valid certification path to requested target 

  1. Another way to verify is as mentioned at:
  2. Create a process whose operation gets the value of the ssl_is_used field. If the value returned is t (true), the connection is using SSL.
  3. Also, if you do not put any SSL parameters in the connection URL, the connection will still succeed; it just will not use SSL. This can be verified in the way specified above.


Microsoft Azure Database

Once you have access to the MS Azure DB, bring up the examples of how to connect, and review the connection string for JDBC. It will look something like the following:




  • A more detailed helpful link:
  • Make sure the firewall rules on the Azure side are updated with the incoming IP addresses. If the IP addresses are not added, there will be errors such as access denied, unknown connection, connection refused, unknown server, or invalid/unknown username and password.
  • Based on the java version used, copy the appropriate sqljdbc<4>.jar in the <atom_install>\userlib\database directory on the local server(s) and restart the atom(s)
  • If you get an error retrieving db data: Network error IOException: Caused by: Connection refused:
    • Make sure the IPs, proxy, and firewall settings allow connection from Azure to the platform.
    • Enable connection pooling
    • Configure DB to use port 1433


AS400 Toolbox using JTOpen.jar

I have designed the following connection:

URL : jdbc:custom:/hostname:port

Class Name:


When I try to import the table or tables, I get a few errors:

database meta-data is null


URL for the default library is missing



In order to connect to the AS400 Toolbox with a jdbc driver:

  1. Download and apply the jt400.jar and/or JTOpen.jar to the <atom_home_installation>\userlib\as400 directory
  2. The tables in AS400 may not be available in the default library or default schema, so the connection URL needs to be adjusted:
    1. jdbc:as400://<hostname>:<port>/library='mylibrary'
    2. OR
    3. jdbc:as400://<hostname>:<port>/schema="MySchema"

Here are some helpful links. You may also need to install the following into <atom_install>\userlib\as400:

There are two types of DB2 JDBC drivers available.

  1. The "Native" JDBC driver.
    1. Sub-protocol: db2
    2. Example: jdbc:db2:localhost/COLLECTIONNAME
  2. The "Toolbox" JDBC driver.
    1. Sub-protocol: as400


For further diagnostics, we suggest using the SQuirreL SQL Client tool to check connectivity. If it works and is able to get data, then you can use the same connection string.

I just added new tables to my AS400 database. Why are they not visible to Boomi?

  1. Check the privileges and the permissions.
  2. Use the SQuirreL SQL Client tool to access and view the new tables.
  3. Check the schemas (library) for the new tables. By default the new tables are created under a default schema or library.


Microsoft SQL Server Express via JDBC

  1. Use the Database Connector configured with the SQL Server (Microsoft) Driver Type:
  2. From vikdor: Connecting to SQL Server Express through JDBC:

Configure TCP/IP communication with SQL Express as follows:

  1. Open SQL Server Configuration Manager
  2. Go to SQL Server Network Configuration -> Protocols for SQLEXPRESS
  3. Set the status of the TCP/IP protocol to "Enabled" (if it is not already)
  4. Open Properties window for TCP/IP, go to IP Addresses section
  5. Go to the bottom of this property page and set the TCP Port under "IPAll" to 1433


Microsoft Access via JDBC

  1. Download the UCanAccess open source JDBC driver for Microsoft Access. Look for the latest file.
  2. Unzip the following files into the Atom's ../<atom_install_root>/userlib/database directory (create the directory if it does not exist):
    • (Note versions may differ)
    • ucanaccess-3.0.6.jar
    • lib/commons-lang-2.6.jar
    • lib/commons-logging-1.1.1.jar
    • lib/hsqldb.jar
    • lib/jackcess-2.1.3.jar
  3. Restart the Atom. 
  4. Create a Database connector as follows:
    • Type: Custom
    • Class Name: net.ucanaccess.jdbc.UcanaccessDriver
    • Connection URL: jdbc:ucanaccess://c:/YOUR/ACTUAL/PATH/TO/ACCESSDB/ACCESS.MDB;memory=true


ODBC for Windows Applications, Microsoft Access

A JDBC-ODBC Bridge can be used to allow Boomi to access databases which do not have a native JDBC driver. Below, we will use Microsoft Access as an example.


Commercial JDBC-ODBC Bridges are available. However, these instructions are for Sun's free JDBC-ODBC bridge which is packaged with Java.

Note: Sun's free JDBC-ODBC Bridge may not meet your needs; please read all of the disclaimers on Sun's website before deciding to use this JDBC-ODBC Bridge:

Setting up a DSN

To set up an ODBC DSN go to: Control Panel > Administrative Tools > Data Sources.


The DSN must be accessible to the Atom when logged on as the "Local System Account" or as the specific user set up in the "Log on" tab of the Atom service (Control Panel > Administrative Tools > Computer Management > Services > ATOM_SERVICE_NAME).


Consult your database specific documentation, or Windows Help, for assistance in setting up your DSN.


Creating a Boomi Database connection:

Go to the Connections tab under the Database Connection component and configure the following information:

  • Database Type -> Custom
  • User -> Boomi
  • Password -> xxxx
  • Class Name -> sun.jdbc.odbc.JdbcOdbcDriver
  • Connection URL -> jdbc:odbc:ODBCCONNECTIONNAMEHERE


Common Errors


Error: You are trying to import an SQL operation from SQL Server and the dialog just hangs there with the words, “Connecting to atom”

Make sure that the SQL user you are using to connect with has both read and write permissions to the database.


Error: Errors occurred while building or executing SQL statement: -999 :: [ERROR: The datediff function resulted in an overflow. The number of dateparts separating two date/time instances is too large. Try to use datediff with a less precise datepart.];

When doing a select by date from a Stored Procedure, you receive the above error.


This error indicates that your Stored Procedure is comparing two dates that are too far apart for the units you have specified. For example, you may be comparing a current date to a default date of 01/01/1900 00:00:00.000, and the difference between the two dates, in the units specified in the Stored Procedure, is too large a number to be handled. The first step to resolving this error is to review your data and search for anomalies, then review your Stored Procedure and evaluate the units specified in the date comparison.


Error: When setting up a DSN ODBC custom database, received error: Unable to extract database meta data : Null

  1. Verify the database driver jar file(s) is placed in the <atom_install>/userlib/database directory.
  2. Verify the Java class name is properly defined in the connection string.


Error: Dynamic Update failed to update records in the database although there is no process error

In the Update database profile, AtomSphere imports all database fields as Conditions by default, so every field in the database table will be listed as a condition in the Map shape. You must manually delete all fields from the Conditions list except for the unique identifier.

Error: When using JDBC-ODBC: Error retrieving db data: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (com.boomi.connector.ConnectorException) Caused by: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified ((java.sql.SQLException))

Troubleshooting steps:

  1. Are you able to connect using the same ODBC DSN name outside AtomSphere?
  2. Can you ping the database server?
  3. Make sure the proper version of the ODBC is used. In this case the 32bit ODBC version was used to connect successfully.
  4. Confirm the architecture (32bit vs. 64bit) matches between the Atom's JVM and the driver.
  5. Helpful link: 


Error: No data is being inserted into the database

The process is designed to insert data into a database and executes successfully, but there is no data in the database. What could be the problem?

  • Make sure that there are no duplicate database profiles with the same name. The target database profile in the map(s) must match the database connector's profile not only by name, but by component ID as well.


Error: When performing a dynamic update, and passing multiple documents to the database connector/operation, all records in database are incorrectly updated with the data from the final document passed in.

You need to add a condition to the database profile that uses a unique identifier, so that only the database records associated with the inbound document are updated.

  • Make sure that you map an appropriate value to each of the elements in your condition segment.
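The failure mode is easy to reproduce outside Boomi: an UPDATE with no identifying condition touches every row, so each successive document overwrites the whole table. A sqlite3 sketch (the orders table and its columns are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernum TEXT PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("12345", "NEW"), ("12346", "NEW")])

# Two inbound documents: (new status, its ORDERNUM)
documents = [("SHIPPED", "12345"), ("CANCELLED", "12346")]

# Wrong: no condition -- every document updates ALL rows, so the
# table ends up holding only the values from the final document.
for status, _ in documents:
    conn.execute("UPDATE orders SET status = ?", (status,))
all_overwritten = conn.execute("SELECT status FROM orders").fetchall()

# Right: a condition on the unique identifier -- each document
# updates only its own record.
for status, ordernum in documents:
    conn.execute("UPDATE orders SET status = ? WHERE ordernum = ?",
                 (status, ordernum))
fixed = dict(conn.execute("SELECT ordernum, status FROM orders").fetchall())
print(fixed)
```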


Error: java.lang.UnsupportedOperationException: Database type does not support cursors

You are getting the error above when invoking a database stored procedure from a Boomi process.

The stack trace in the logs may look like below:

Caused by: java.lang.UnsupportedOperationException: Database type does not support cursors.
at com.boomi.connector.database.types.SqlParameterTypeFactory.getCursorType(
at com.boomi.connector.database.types.SqlParameterTypeFactory.getParameterType(

Here are some troubleshooting steps to try to resolve this error:

  1. Run the stored procedure outside of Boomi in a tool like SQLDeveloper to see if the error is produced there.
  2. Ensure that the version of the Oracle JDBC driver supports Oracle Ref cursors.
  3. Modify the IN / OUT parameter types and ensure they are configured as expected for the stored procedures.


Error: Errors occurred parsing result set. ((com.boomi.connector.database.reader.ResultSetParseException)) Caused by: Invalid column index nn. ((java.sql.SQLException))

You have a process that contains a SQL select and you are receiving the above error.

  • If the process is deployed, open the error message by clicking the date/time stamp in Process Reporting, and then clicking the error message. This brings up a slide-out page that indicates where the error occurred. Open the profile component and compare the statement to the number of profile elements. The select should include every element that is part of the profile. (In other words, if there are 12 elements in the profile, the select statement should be selecting 12 columns from the table.)


Error: Errors occurred while building or executing SQL statement: ORA-00911: invalid character ; Caused by: ORA-00911: invalid character

This error typically occurs when you try to execute a SQL statement that includes a special character, such as a trailing semicolon.




  1. Execute the SQL statement outside Boomi in a third-party tool such as the SQuirreL SQL Client.
  2. Use a Program Command shape instead of a connector.
    1. Specify SQL statement as Type
    2. Use the appropriate connector to attach to the database
    3. Enter the SQL string WITHOUT a semicolon
    4. Save and rerun the process

Reference Guide URL:


Error: Error retrieving db data: Index: 28, Size: 28; Caused by: Index: 28, Size: 28

This error often occurs when the number of columns being returned does not match the number of fields in the profile.


Error: Database: I/O Error: Connection reset; Caused by: Connection reset

This error often occurs on the DB connector, but may appear for other connectors as well.  If it occurred on a Database Connector, there will typically be an entry such as the following:

Timestamp INFO Connector name: database Connector; DB Operation: Executing Connector Shape with # document(s).
Timestamp INFO Connector name: database Connector; DB Operation: Shape executed with errors in # ms.

  • If the error "connection reset" happens intermittently or very quickly during the DB operation (within milliseconds), this type of connection reset is typically an issue at the network layer or database layer. Review this error with your database admin, or network admin.  They may need to enable a trace on this DB connection and monitor it at a lower level.
  • If the error occurs more consistently (with every execution), there may be a firewall or DB configuration issue preventing the connection from completing.
  • If the error occurs during a very long DB operation (minutes or hours), you may need to batch your data in smaller amounts in each execution and integrate the data across more frequent executions to minimize the time that is spent in open DB connections.


Error: Unable to open database connection. Io exception: Got minus one from a read call

This error indicates that there is an issue on the database side.  It may attempt the connection for a period of time before eventually failing.


Possible reasons for this error could be a lack of available sessions on the database side, or that the maximum number of database connection sessions may have been exceeded.  The database logs should be analyzed for errors that would contain more specific details. Also, increasing the database process parameters may help. Please consult with your Database Administrator regarding this error as they will need to look at the current database settings and evaluate impacts and possible changes.


Here are some related threads online that describe possible options specific to Oracle or jdbc:


Error: Database Connection Time Out

You will most likely need to verify that the host and port you have specified in the AtomSphere Connection component match your database server settings.


You can test these settings by using a query tool outside of AtomSphere to try establishing the connection. Once you are able to establish the connection with an external query tool, make sure to use the same settings within the AtomSphere Connection.


Error: Unable to open database connection. ORA-28000: the account is locked (com.boomi.process.ProcessException)

The error indicates that the database account is locked due to failed login attempts. In order to unlock it, you would need to use the following command for the SYSTEM user:

alter user system account unlock;

Please contact your server admin or DBA, who will have the necessary privileges to access the database.

Additional Reference links:




How can I call a Stored Procedure?

The best-practice method for calling a stored procedure depends on how the stored procedure is being called, using either a map function or a database connector:

  1. The stored procedure can be called from within a map function by selecting the Lookup type, then SQL Lookup, then Stored Procedure.
  2. The stored procedure can also be called from a Database Connector by selecting it as the Statement type in the Database Profile component.

The key difference between these two methods is the database connector supports stored procedures but does not support output parameters, while the Map Function SQL Lookup does. The database connector requires the response to be provided via a returned result set.


Therefore, if the stored procedure has both input parameters and output parameters, it will be necessary to use the Map Function Lookup SQL stored procedure call to handle the data as output parameters.  If the stored procedure has input parameters, no output parameters and returns a result set; you can use the database connector to execute the stored procedure using a DB read profile and with the fields defined to match the elements returned in the result set.


As described in this reference guide article (Database Profile), the following standard data type elements are supported by the database profile, including for result sets:

  • Character
  • Number
  • Date/Time
  • CLOB
  • BLOB

(Local database driver-specific data types may not be supported.)

Prior to the May 2014 release, the Oracle type CURSOR was not supported.


Why is only one record picked from the database?

There are 10 records in the database. The process should pick all of them, and process them one by one. Instead the process picks only one of the records on each individual run. What can be done to retrieve or pick all 10 records in one shot?

  • Set the Batch Count to 1 if you do not use any link elements. If you use a link element, then change or set the Batch Count to 0.


How can I connect to a DB through an SSH tunnel?

If you configure the SSH connection (tunnel) outside of AtomSphere (perhaps with a third party tool) and there is a valid SSH connection between the Atom server and the target DB, then the DB connector can also be configured to use that tunnel.


Some more information can be found about the setup here. Note that this is just for an example: Access Your Database Remotely Through an SSH Tunnel – Engine Yard Developer Center 

How do I pass parameters to an SQL script/statement?

In order to set a dynamic parameter in SQL script, you must use the '?' symbol. The question mark will bind positionally to the parameter listed in either the Profile or Parameters list (Program Command, Decision, etc.). Note: the substitution occurs based on the order of the parameters, not based upon their names.


If two placeholders are specified in the SQL statement by the use of two '?', two parameters will need to be defined to pass values into the query.  There must be a one-to-one correlation between parameters and these question marks.
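Positional binding is standard JDBC behavior and easy to observe with any driver; here with Python's stdlib sqlite3, which uses the same '?' placeholder (the orders table is hypothetical). Note that swapping the parameter order changes the result, because the values bind by position, not by name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernum TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('12346', 'OPEN')")

sql = "SELECT COUNT(*) FROM orders WHERE ordernum = ? AND status = ?"

# Parameters bind by position: first ? <- '12346', second ? <- 'OPEN'
hit = conn.execute(sql, ("12346", "OPEN")).fetchone()[0]

# Same values in the wrong order bind to the wrong placeholders
miss = conn.execute(sql, ("OPEN", "12346")).fetchone()[0]

print(hit, miss)  # 1 0
```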


What are some Best Practices regarding Retry Schedules?

When setting and deploying a Retry Schedule to automatically re-run failed documents, it is important to set the proper Batch Count in the Database Read Operation. You will want to set a Batch Count of 1 to treat each record as an individual document and/or apply the proper Link Element. If a document/record fails for this case, the retry schedule will only re-run the failed instance vs. re-running the entire database record set.

How does the Commit by Number of Rows option work in a Database operation?

Example scenario:

On my Oracle database connection, I have the commit option set to commit by number of rows with a value of 1000. When I check my database, I see that the rows are being committed one by one. Why is this? The data is being retrieved from Salesforce.


Each database document produced by the map will be executed by the database connector individually. So, if a thousand documents each containing a single row are sent to the database connector with a commit batch level of 1000, the rows will be committed one at a time, not in a single transaction of 1000. The map can be changed to produce a single database document by selecting "Batch Results" option in the Salesforce query options. This will produce a single XML document containing a List of all the Salesforce objects. The Salesforce XML profile will have to be modified manually to match the batch output structure. Basically a new root List node will be added and the existing root object node will now be a child of the List node.
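The behavior described above can be captured in a simplified model (this is an illustration of the described semantics, not Boomi's code): rows only accumulate toward the commit level within one document's execution, so per-document row counts drive commit frequency.

```python
def commits_needed(documents, commit_rows=1000):
    """Simplified model of 'commit by number of rows': rows accumulate
    toward the commit level only WITHIN one document's execution, and
    each document is executed on its own."""
    commits = 0
    for doc in documents:
        pending = 0
        for _row in doc:
            pending += 1
            if pending == commit_rows:
                commits += 1
                pending = 0
        if pending:  # the end of a document flushes whatever remains
            commits += 1
    return commits

# 1000 single-row documents: the 1000-row commit level never fills,
# so every document is committed on its own.
print(commits_needed([[0]] * 1000))         # 1000

# One "Batch Results" document with 1000 rows: a single commit.
print(commits_needed([list(range(1000))]))  # 1
```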


I need to insert data into my third-party database. How do I configure a SQL INSERT INTO statement?

The fields listed in the INSERT INTO should be in the same order as the fields in the profile.

Using the following example insert statement:

INSERT INTO ObjectName (FieldName1, FieldName2, FieldName3)
VALUES (?, ?, ?)

Please note, the Field listing in the Profile should be:

  • FieldName1
  • FieldName2
  • FieldName3
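Because the VALUES placeholders bind positionally to the profile elements, a profile ordered differently from the column list silently puts values into the wrong columns. Demonstrated with Python's stdlib sqlite3 (the contacts table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contacts (first_name TEXT, last_name TEXT)")

sql = "INSERT INTO contacts (first_name, last_name) VALUES (?, ?)"

# Profile order matches the column list: correct row
conn.execute(sql, ("Ada", "Lovelace"))

# Profile order reversed: no error, but the values land
# in the wrong columns
conn.execute(sql, ("Lovelace", "Ada"))

rows = conn.execute("SELECT * FROM contacts").fetchall()
print(rows)
```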

Differences in SQL syntax exist between database vendors. Your SQL statements will be parsed by the database you are connecting to; therefore, refer to the documentation provided by your database vendor for syntax details. We also recommend developing and testing queries using a SQL tool such as the SQuirreL SQL Client.

Why doesn't the Database Profile import wizard always generate the schema name?

A Database Profile is generated with the Import Wizard per the steps from the reference guide article, but the generated profile does not include the schema associated with the database table.


The schema is not detected by the import wizard when it browses the metadata about the table. If the database requires the schema to be defined by the user executing an SQL statement against the table, you will need to manually modify and define the schema after generating the database profile.


After AtomSphere generates the SQL statement, you can, and in most scenarios, should, edit the SQL statement.  The auto-generated statement is a simple "select all" query and does not include all details. Table joins, where clauses, parameter fields, etc., may need to be added as well if required.


Does the parent-child table relationship apply only at the database level?

Does the parent-child table relationship apply only at the database level? Does it also apply to parent-child SQL insert statements?


According to our reference guide article, it does apply to parent-child SQL statements as well.


How can I call a long running Stored Procedure without holding up the completion of the process?

You have a process that calls a stored procedure via a Program Command. The Program Command runs for an exceptionally long time under normal circumstances. The results of the stored procedure are not used later in the process execution, and you want the process to run to completion without waiting for the stored procedure to complete.


A solution to this issue is to add a branch to the process with two legs.

  • On branch 1, add a sub-process call configured not to wait for the process to complete, and in the sub-process add the Program Command to call the stored procedure.
  • On branch 2, add the remainder of the process functionality.

This will allow the process to call the stored procedure, but then continue to completion without waiting for the Stored procedure to run to completion.


How can I verify that a database operation inserted or updated the data?

When configured to perform an insert or update, the Database operation returns only a success or failure status, but you would like to verify within the process that the database operation actually inserted or updated the data.


Here are two possible options to check that records were inserted or updated:

  1. Add a *New* DB connector after existing DB connector operation that performs the insert or update.   Configure the new DB Connector to perform a query against the target table to query the records based on key information and check that they were created or updated appropriately.
  2. Create a stored procedure that inserts or updates the data and then returns evidence of the record data back as either a result set or as output parameters.   Call the procedure from Boomi with inputs, and outputs either from the result set or from the output parameters.  Refer to Boomi articles and reference guidance on best practices for invoking stored procedures using Boomi.
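Option 1, write then query back by key, looks like this in miniature (Python's stdlib sqlite3 standing in for both connectors; the orders table is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (ordernum TEXT PRIMARY KEY, status TEXT)")

# Existing DB connector operation: the insert
conn.execute("INSERT INTO orders VALUES (?, ?)", ("12346", "NEW"))
conn.commit()

# New DB connector: query back by key to confirm the write landed
row = conn.execute("SELECT status FROM orders WHERE ordernum = ?",
                   ("12346",)).fetchone()
assert row is not None, "record was not inserted"
print(row[0])  # NEW
```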


How do I insert an XML file in a database table?

For example:

A process retrieves shipment order data from a database table and maps that data to an XML profile. The goal is to get the data into an XML file and insert that file into an XMLTYPE column in a table (the database is Oracle).

  1. Read the shipment orders from the database using a Database connector defined with a DB profile.
  2. Configure a map to translate DB profile to XML profile.
  3. Use a Set Properties shape to temporarily store the entire XML document in a dynamic process property (i.e. use the Current Data parameter type).
  4. Configure a second map to translate XML profile to a new DB profile containing the insert/update statement. In this map, use a 'Get Dynamic Process Property' map function to retrieve the XML data and map to the desired destination column.


How can I improve the performance of my DB connection?

The following bullet points may address your concerns:

  • Is the Atom local to this DB connection, or is it on a remote server? If the Atom runs on the same server as the database, use a local address when possible.
  • Consider clearing the "Test Connection When Borrowing From Pool" check box if possible.
  • Work with your DBA to determine how many connections the DB can handle at a given time. Specify this as "Maximum Connections" instead of letting it be unlimited. It may be easier to diagnose DB issues if you use specific values in the DB connection fields. For example, MS SQL Server may allow thousands of connections, but you may choose to limit it to 10 during diagnosis.
  • Consult the DBA regarding the other DB connection settings as well: "Min Connections", "Max Idle Time", etc. Some typical values to try might be Max Connections: 10, Max Idle Time: 10 seconds, Max Wait Time: 5.
  • Consider adding "Additional Options" to the connection if required.
  • Consider rescheduling processes to not run concurrently with other processes using the same DB connector.
  • Have the DBA examine/optimize select statements.


Why isn't my Database connection pooling working as expected?

In the Database Connection component, on the connection pool tab, select at least one of the Connection Verification options and also supply a Validation Query.

Please note that selecting a Connection Verification option without supplying a Validation Query will not verify connections, because there is nothing available to run against the connection. For example, selecting Test Connection When Borrowing From Pool without supplying a Validation Query will allow stale database connections to be taken from the pool.

  • Test Connection When Borrowing From Pool will verify that the connection is still valid when taking it from the pool.
  • Test Connection When Returning From Pool will verify that the connection is still valid when adding it back to the pool.
  • Test Idle Connections will prompt AtomSphere to occasionally verify that idle connections are still valid.
  • The Validation Query is used to determine whether the connection is valid and is used along with the selected test connection settings. It is recommended that the query be a simple SQL statement that returns a single row of data.
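Conceptually, "Test Connection When Borrowing From Pool" plus a Validation Query behaves like the following sketch. This is Python with sqlite3 standing in for the pooled database connections, not Boomi's actual implementation; it only illustrates why the Validation Query is required for the verification to do anything:

```python
import sqlite3

VALIDATION_QUERY = "SELECT 1"  # a simple query returning a single row, as recommended

def borrow(pool):
    """Take a connection from the pool, validating it before handing it out
    (the idea behind 'Test Connection When Borrowing From Pool')."""
    while pool:
        conn = pool.pop()
        try:
            conn.execute(VALIDATION_QUERY).fetchone()
            return conn                 # still valid: hand it out
        except sqlite3.Error:
            conn.close()                # stale: discard it and try the next one
    return sqlite3.connect(":memory:")  # pool exhausted: open a fresh connection
```

Without a validation query to run, a stale connection in the pool would be handed out as-is, which is exactly the failure mode described above.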

The database connection pool settings on the Connection Pool tab should be tuned for the specific Atom, Molecule, or Cloud environment; consider different settings for each environment. Consult your internal database administrator for advice, as these settings are environment and database specific.


Provided no other issues are occurring in either the database or with the connection pool configuration, the atom should not need to be restarted when changes are made in the connection pool tab.

How do I configure Database Connection pooling on an atom to gracefully handle a database restart?

A process has a database connection with connection pooling enabled per the guidance at this link:


When the local database server has issues and goes offline, the Atom hangs and does not re-establish connections when the database server comes back online. Restarting the Atom corrects the issue, but a restart is not necessary when connection pooling is configured to verify connections.


In the Database Connection component, on the connection pooling tab, select "Test Connection When Borrowing From Pool" from the Connection Verification options and also supply a Validation Query. Please note that selecting a Connection Verification option without supplying a Validation Query will not provide the connection pooling verification selected because there is nothing available to verify the connection.


Provided no other issues are occurring in either the database or with the process, the atom should not need to be restarted and stale database connections should not be selected from the pool.


How do I configure a Standard Database Update with a WHERE clause?

The parameters depend on the order of the fields in the profile, not on the field names themselves. The fields that will be included in the WHERE clause should be at the end of the list. Using an example where "Id" is the field used in the WHERE clause (with "MyTable" as a placeholder table name), the update statement should look like this:

UPDATE MyTable SET
FieldName1 = ?,
FieldName2 = ?,
FieldName3 = ?
WHERE Id = ?

and the Field listing in the Profile should be in this order:


Be sure not to include the WHERE clause field (e.g. Id) in the SET part of the statement; placing Id in the SET part may result in all records being updated.

The Id = ? should only be in the WHERE clause part of the statement.
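Outside Boomi, the same positional-binding behavior can be seen with any prepared-statement API. A minimal sketch in Python (sqlite3 standing in for the target database; the table and field names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Id INTEGER PRIMARY KEY, FieldName1 TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'old'), (2, 'old')")

# Positional '?' parameters bind strictly by position, not by name, so the
# WHERE-clause value must be supplied last -- matching the profile field order.
conn.execute("UPDATE t SET FieldName1 = ? WHERE Id = ?", ("new", 1))

rows = conn.execute("SELECT Id, FieldName1 FROM t ORDER BY Id").fetchall()
# rows -> [(1, 'new'), (2, 'old')]
```

If the WHERE-clause value were bound in the wrong position, the update would either fail or match the wrong rows, which is why the profile field order matters.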


How to retrieve a DB integer value and write it out as decimal?

Col3 in the following query has an integer value of 123456789, but you want the value to print as 1234567.89:


select col1, col2, col3, ..., col100
from table


Using the number precision format (number mask: ####.##) does NOT work. You can change the source data, of course, or you can change the select query to convert the actual value of the column into a decimal:


select col1, col2, ROUND( (col3 * 1.00)/100.00, 2), ... col100
from table
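The effect of that conversion can be checked with any SQL engine; here is a quick sketch using Python's built-in sqlite3 (the column value is inlined for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Multiplying by 1.00 forces a decimal result before dividing by 100.00,
# then ROUND(..., 2) trims it to two decimal places.
value = conn.execute(
    "SELECT ROUND((123456789 * 1.00) / 100.00, 2)"
).fetchone()[0]
# value -> 1234567.89
```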

How to implement multiple SQL statements in Boomi where the second statement depends on a result of the first statement?

In this scenario, you may have multiple SQL statements where the second statement depends on a result of the first statement.

For example, here are two SQL statements that have a dependency:


In this example, a variable @temp set to the last insert ID in table1 will be used during the insert in table2.


How can this be implemented in Boomi?


The use of multiple SQL statements in one DB profile is intended for simpler scenarios. For this scenario, you can break the statements up into multiple Database Connector operations. Here is an approach using the example above:


  1. In the first Database Connector, create an insert operation that will perform an SQL insert with this statement:
  2. After this Database Connector shape, create a second Database Connector shape and configure a Query operation to query LAST_INSERT_ID.
  3. Add a Map to map the data returned from query #2 into the database profile format needed for step # 4.
  4. Create a third Database Connector shape that performs the insert statement for:
    where the first Field value in the database profile is set to the mapped value of LAST_INSERT_ID from the prior query.


If you need to pass an auto-incremented ID from one table to another, you can use the approach above, which is essentially three separate Database connectors: 1) insert into the first table to create the auto-incremented ID, 2) query the table to get the created ID, and 3) send that ID into the third connector to update the other table.
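The three-connector flow above can be sketched in plain code. This is Python with sqlite3 standing in for the database (SQLite's last_insert_rowid() plays the role of MySQL's LAST_INSERT_ID(); the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE table2 (table1_id INTEGER, note TEXT)")

# Step 1 (first connector): insert into table1 to create the auto-incremented id
conn.execute("INSERT INTO table1 (name) VALUES (?)", ("order-1",))

# Step 2 (second connector): query the id that was just created
new_id = conn.execute("SELECT last_insert_rowid()").fetchone()[0]

# Step 3 (third connector): use that id in the insert into table2
conn.execute("INSERT INTO table2 (table1_id, note) VALUES (?, ?)",
             (new_id, "detail"))
```

Note that steps 1 and 2 must run back to back on the same logical record for the returned id to be the right one, which is what the sequencing question below addresses.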


Frequently asked questions about the above scenario:


  1. If there are multiple records processing at the same time, will the records be handled in sequence? (You wouldn't want the query at step 2 to return a different last insert ID if multiple records are processing at the same time.)
    • If sequencing of the data is an issue, either batch the data at the source to execute one record at a time, or add a Flow Control shape set to Run Each Document Individually so each document runs through the shapes on its own. Keep in mind that this may slow down performance, so it depends on the volume this process will handle. Determine whether the process will handle a high volume or a small number of records per execution, and how frequently it will run.
  2. How are the database connections managed? Will each connector use connections from a common pool?
    • Database connections are managed depending on whether you are using the Connection Pool tab in the Connection component. If you are not using that tab, a new connection is established for each connector. If you are, each connector will attempt to grab a connection from the database connection pool per the configuration settings in that tab.
  3. Can Boomi execute multiple SQL statements defined in a single script under one statement in a DB profile instead of using multiple DB connectors?
    • Boomi is flexible, and multiple SQL statements can be implemented in a single DB connector/DB profile, but it depends on what those SQL statements are designed to do, so not every scenario is possible with just one DB connector/DB profile. If you are unsure about your scenario, try it out and observe the results. Break the SQL up into simpler scripts first to make sure each piece works separately, then try combining them into one SQL statement in Boomi to see if it produces the same desired result. If it does not, use separate connectors/profiles, each with its own SQL statement. Alternatively, if you have multiple SQL statements in a script, consider developing and invoking a stored procedure if that is an option for your database type.


When a Database element has an empty value, why is an empty string getting inserted into my database instead of a NULL (or vice versa)?

Industry wide, different database types may treat empty strings differently.

AtomSphere handles empty strings and NULL similarly to how Oracle does. Here are some considerations if you expected a NULL value to be inserted (not an empty string):

  • Find out the impact of inserting a NULL versus an empty string. There may be no functional impact: the end user or application may not see any difference between NULL and an empty string, so it is important to find out whether this will be an issue for them.
  • If you can use a simple stored procedure, pass a special literal value (e.g., "*MyTemporaryEmptyString*") to the procedure and make a one-line change in the procedure to interpret that value as either an empty string or a NULL.
  • Implement custom scripting to convert the empty string or NULL to the desired value. Further investigation may be needed to validate whether this is a viable approach.
  • Submit a question or browse our Answers forum to see if others have implemented a solution for this issue for integrating with their database.
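The custom-scripting idea in the third bullet can be sketched outside Boomi; this is a minimal Python illustration (not Boomi's scripting API) of normalizing empty strings to NULL (None) before the insert:

```python
def normalize(record: dict) -> dict:
    """Replace empty-string values with None so the database driver
    binds them as NULL rather than as '' (empty string)."""
    return {k: (None if v == "" else v) for k, v in record.items()}

normalize({"name": "widget", "color": ""})
# -> {'name': 'widget', 'color': None}
```

The reverse mapping (NULL to empty string) is the same one-line change in the other direction.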


When my inbound data no longer has a value in a particular element, why doesn't a Dynamic Update replace or remove the existing database value for that element?

A Dynamic Update statement type allows you to simply define the table and fields that you would like to update. Dell Boomi AtomSphere dynamically generates the update statement based on these settings and the data that is mapped or passed into each of the fields. If there is no source data to send into a particular field, the field definition and value will not be included in the actual update statement.


If you want to update database elements based on an inbound record and include the elements that do not contain data as part of the update, you will need to use a Standard Database Insert/ Update/ Delete.


Can the table name for a DB profile SQL statement be set dynamically?

Table names in a Database profile SQL statement can NOT be set dynamically.


SQL statements in DB profiles are handled as "prepared statements", and the usage intent is akin to the following programming reference:


In this context, parameters are used for field/value pairs, where the parameters are set to values that correspond to fields in the SQL statement.
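This limitation is inherent to prepared statements in general, not specific to Boomi: parameter markers can bind values, but never identifiers such as a table name. A quick Python/sqlite3 illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER)")

# A '?' parameter can bind a field value...
conn.execute("INSERT INTO orders (id) VALUES (?)", (1,))

# ...but never an identifier such as a table name: the statement below
# fails at prepare time because '?' is not valid where a table name goes.
try:
    conn.execute("SELECT * FROM ?", ("orders",))
    table_name_bound = True
except sqlite3.OperationalError:
    table_name_bound = False
# table_name_bound -> False
```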


My MS SQL function takes three parameters, how do I pass one of those parameters as a dynamic value?

  1. Use a Connector step to create a Database connector
  2. Select a Get action
  3. The operation profile should have the parameters listed
  4. Check and change the options from write to read so that the parameters list is enabled
  5. Add the parameter(s) required
  6. Use the following for the SQL statement:
  7. declare @interval numeric
    set @interval = ?
    insert into test_table () values ('test 1',dateadd(HOUR,@interval, current_timestamp) )
  8. Parameters tab for the Database connector can be adjusted to have a static value or a document property to generate and supply a value to the parameters inside the database function.


How to use comma separated value list in a SQL query using an IN clause?

When using a database connector with a list of values as parameters, Boomi keeps passing ? into the SQL query instead of the actual parameter.


Here is a workaround for using comma-separated value parameter strings in an MSSQL/Oracle query with an IN clause, based on the following approach found on the web:


Note: The following steps were done in Microsoft SQL Server 2008R2 Standard Edition 64-bit using the Microsoft SQL Server Management Studio.

  1. Let's assume we have a database already created called, BoomiSupport.
  2. Create a new table called, CSVDemo:
    CREATE TABLE [dbo].[CSVDemo](
    [Id] [int] NOT NULL,
    [Descr] [varchar](50) NOT NULL,
    [Col3] [varchar](50) NOT NULL,
    [Col4] [varchar](50) NOT NULL
    ) ON [PRIMARY]
  3. Add some data to your table:
    Id    Descr            Col3        Col4
    1    The first row        Col3 data    Col4 data
    2    The second row        Col3 data    Col4 data
    3    Another row        Col3 data    Col4 data
    4    The final row        Col3 data    Col4 data
  4. Execute a simple query:
    SELECT Id, Descr FROM CSVDemo WHERE Id IN (1, 3)

    And you should get these results:

    Id    Descr
    1    The first row
    3    Another row
  5. SQL has no concept of lists, arrays, or other useful data structures; it only knows about tables (and table-based information), so it converts the string list into a table structure when it compiles the command, and it cannot compile a variable string. What we have to do is convert the comma-separated values into a table first. We will create a user function to do this and make it more general purpose.
    Open a New Query window and copy/paste the following code into it:
    /****** Object:  UserDefinedFunction [dbo].[CSVToTable]    Script Date: 04/28/2013 10:45:17 ******/
    CREATE FUNCTION [dbo].[CSVToTable] (@InStr VARCHAR(8000))
    RETURNS @TempTab TABLE
       (id int not null)
       --(id varchar(500) not null)
    AS
    BEGIN
        -- Ensure input ends with comma
        SET @InStr = REPLACE(@InStr + ',', ',,', ',')
        DECLARE @SP INT
        DECLARE @VALUE VARCHAR(1000)
        WHILE PATINDEX('%,%', @INSTR) <> 0
        BEGIN
           SELECT @SP = PATINDEX('%,%', @INSTR)
           SELECT @VALUE = LEFT(@INSTR, @SP - 1)
           SELECT @INSTR = STUFF(@INSTR, 1, @SP, '')
           INSERT INTO @TempTab (id) VALUES (@VALUE)
        END
        RETURN
    END

  6. You will notice that the only change from the original function is an added commented line below the id declaration, to allow for use of a varchar instead of an int (change this based on your testing needs). Execute this SQL, and you should now have created the function that will be called from the stored procedure we create next.
  7. Now we are going to create the stored procedure, getTestRecord, that will be called from Boomi. 
    Open a New Query window and copy/paste the following code into it:
    /****** Object:  StoredProcedure [dbo].[getTestRecord]    Script Date: 01/28/2015 09:08:30 ******/
    /****** Author: Boomi Support ******/

    -- Add the name of your database here
    USE [BoomiSupport]
    GO

    CREATE PROCEDURE [dbo].[getTestRecord]
        -- Add the parameters for the stored procedure here
        @LIST VARCHAR(500)
    AS
    BEGIN
        -- SET NOCOUNT ON added to prevent extra result sets from
        -- interfering with SELECT statements.
        SET NOCOUNT ON;

        -- PRINT contents of parameter to Messages tab - for debugging only
        PRINT '@LIST=' + @LIST + '...'

        -- Declare the sql string with the IN clause using the @LIST parameter
        DECLARE @sql_str nvarchar(4000)

        -- Set the sql string to return all columns from table CSVDemo (your table name here)
        -- using the Id column (your column here) with an IN clause that calls the
        -- predefined conversion function, which is passed the @LIST parameter
        SET @sql_str = 'SELECT * FROM CSVDemo WHERE Id IN (SELECT * FROM dbo.CSVToTable(' + @LIST + '))'

        -- PRINT the generated statement to the Messages tab - for debugging only
        PRINT 'sql_str=' + @sql_str + '...'

        -- Execute the sql string
        EXEC sp_executesql @sql_str
    END

  8. Once the stored procedure has been created successfully, you can test it by opening another New Query window and enter the following:
    EXECUTE getTestRecord "'1,5,10'"
    Your Results tab should show "The first row" and the Messages tab should show the SQL that was used.
    Note: The following steps were done in Boomi.
  9. Create a new process with the following shapes connected in order: No Data Start Shape, Database Connector shape, Map shape, Message shape, and a Stop shape.
  10. For the Database connector, the Action will be Get; select the Connection as an existing connection to your SQL Server that has the BoomiSupport db in it. Create a new Database Operation, and create a new profile. In the profile, select Statement and under the Type: option select Stored Procedure Read. For the Stored Procedure: option, enter getTestRecord. Right-click on the Fields branch on the left, and Add Field for each one of your table columns: Id, Descr, Col3, Col4. Right-click on Parameters on the left, and Add Parameter for just one, called @List, for the Parameter Name: option. Save and Close your Database Profile. Save and Close your Database Operation. Before closing the Connector Action window, click on the Parameters tab. For the Input: option, click the magnifying glass icon and select @List. For the Type: option, leave it as Static. For the Static Value: option, enter the following: '1,5,10'. You will notice this is slightly different than what was entered in our execute SQL query (EXECUTE getTestRecord "'1,5,10'"). Click the OK button to close the Connector Action window.
  11. Optional Step - Now let's configure the next shape, the Map. Create a new map with the source profile being the same DB profile you created earlier. Check to make sure you see the same list of Fields and the @List parameter under Parameters. Then for the destination profile, create an output XML profile with the same number and names of elements as your SQL table: Id, Descr, Col3, Col4. Connect each source Field to its corresponding destination element. Save and Close your new Map.
  12. Optional Step - Now for the last shape, you can configure your Message shape with a Message: {1} and Parameters as Current Data to see the output of the mapping.
  13. Save your process and test it against one of your local atoms. After the process executes successfully, look at the process log to see that the call to our stored proc getTestRecord(?) completed. If you created the optional Message shape in step 12, also look at the contents of the Message shape to see the xml output of the record retrieved from the database.
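For comparison, outside Boomi the same IN-clause result can be achieved in application code without a stored procedure by expanding one parameter marker per value, so each id still binds as a real parameter. This is a different, general-purpose technique (shown here in Python with sqlite3), not the workaround above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CSVDemo (Id INTEGER, Descr TEXT)")
conn.executemany("INSERT INTO CSVDemo VALUES (?, ?)",
                 [(1, "The first row"), (2, "The second row"),
                  (3, "Another row"), (4, "The final row")])

csv_list = "1,3"
ids = [int(v) for v in csv_list.split(",")]
# Build one '?' placeholder per value so each id binds as a real parameter
placeholders = ",".join("?" * len(ids))
rows = conn.execute(
    f"SELECT Id, Descr FROM CSVDemo WHERE Id IN ({placeholders})", ids
).fetchall()
# rows -> [(1, 'The first row'), (3, 'Another row')]
```

The stored-procedure workaround is preferable inside Boomi because the DB profile's statement is fixed at design time and cannot vary its placeholder count per document.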


How can I insert file content or character data into BLOB data type field in my Oracle database?

In any database, the BLOB data type is typically used to store binary content such as images or videos; for file content or character data in an Oracle database, the CLOB data type is recommended instead. As a workaround, use custom scripting to Base64-encode the file content or character data prior to mapping it into the BLOB data type field. When reading that BLOB data from the database, you will then need to Base64-decode the data prior to using it.
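The encode-on-write, decode-on-read round trip can be sketched as follows (Python with sqlite3 standing in for the database and its BLOB column; table and column names are illustrative, and this is not Boomi's scripting API):

```python
import base64
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (payload BLOB)")

# Base64-encode the character data before mapping it into the BLOB field...
raw = "file content or character data".encode("utf-8")
conn.execute("INSERT INTO docs VALUES (?)", (base64.b64encode(raw),))

# ...and Base64-decode it again when reading the BLOB back out.
stored = conn.execute("SELECT payload FROM docs").fetchone()[0]
decoded = base64.b64decode(stored)
```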


