Salesforce Integration Guide

Document created by chris_stevens (Employee) on Mar 21, 2016. Last modified by Adam Arrowsmith on Aug 15, 2017. Version 13.

The Salesforce connector connects seamlessly to any native Salesforce application, including Sales Cloud and Service Cloud, as well as any Custom Cloud or Force.com application. As a Salesforce user, you can use the Salesforce connector to integrate with any other cloud or on-premises application.

 

 

User Guide Articles

Here are some links to our User Guide, which you may find useful when using and configuring the Salesforce Connector.

 

Process Library Examples

 

Scenarios on How to Use the Salesforce Connector

 

Scenario 1 - Querying Records

When querying records from Salesforce, each record found is returned as a separate Document and processed independently. By default, the Query action ignores deleted or archived records. If you would like to return records in these states, check the "Include Deleted" option in the Operation. For users familiar with the Salesforce API, this corresponds to the queryAll call.

 

Using the Like - Filter Operator:

There are cases when performing a Salesforce query where you would like to select approximate matches for a filter parameter (fuzzy search). For example, you could have many Account records in your organization that have the term 'ABC' in the Company Name. In a standard query case, you would use the 'Equal To' operator to match the Name to a static value.

 

Equal To - Example:

Query: Select NAME from ACCOUNT where NAME = 'ABC'

Results: ABC

 

This exact-match query returns only records whose Name is exactly 'ABC' and may therefore miss the records you want, so the LIKE operator paired with your Expression may be the better strategy. To implement a LIKE query, wrap the static parameter value in % characters on either side, similar to standard SQL syntax.

 

Like - Example:

Query: Select NAME from ACCOUNT where NAME LIKE '%ABC%'

Parameter Definition: %ABC%

Results: ABC, ABC Company, ABC Industries, Company ABC, ABC Corp.
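The wildcard convention above can be sketched in a few lines. A minimal, hypothetical helper (the function name and structure are illustrative, not part of the connector) that chooses between an exact-match and a LIKE clause based on whether the parameter value contains % wildcards:

```python
def build_like_query(field, param_value, obj="ACCOUNT"):
    """Build a SOQL filter clause, using LIKE when the value contains % wildcards."""
    operator = "LIKE" if "%" in param_value else "="
    return f"Select {field} from {obj} where {field} {operator} '{param_value}'"
```

For the parameter definition %ABC% this produces the LIKE query shown above; without the wildcards it falls back to the Equal To form.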

 

Scenario 2 - Updating Records

Updating a specific record in Salesforce requires that you pass in the internal Salesforce ID for that record in the update request. This value is typically an 18-character alphanumeric value that looks like this: 0015000000MJtnHAAT. If this value doesn't exist in the source data you will need to look it up from Salesforce.

 

Note that this 18-character, case-insensitive ID is slightly different from the 15-character, case-sensitive ID you may see in the Salesforce UI or in the browser address bar.
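The 18-character ID is simply the 15-character ID plus a 3-character case-safety suffix derived from the capitalization of the first 15 characters. A sketch of the published conversion algorithm:

```python
def to_18_char_id(sfid15):
    """Append the 3-character case-safety suffix to a 15-character Salesforce ID."""
    alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ012345"
    suffix = ""
    for block_start in range(0, 15, 5):
        block = sfid15[block_start:block_start + 5]
        # Each uppercase letter sets one bit; the first character is the low bit.
        value = sum(1 << i for i, ch in enumerate(block) if ch.isupper())
        suffix += alphabet[value]
    return sfid15 + suffix
```

For example, the 15-character ID 0015000000MJtnH converts to the 18-character ID 0015000000MJtnHAAT shown above.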

 

As a best practice, if the other application has a field that can be used to capture an external ID, populate it with the Salesforce ID so you don't have to do a lookup to get the ID in your Process.

 

To do this, in the Map that maps from the source Profile to the Salesforce Update Profile, use a Map Function that performs a Connector Call to Salesforce. The Connector Call's Operation should perform a Query action against the particular object type. Add a Filter to the Operation so you can pass the key value(s) from the source data as an Input Parameter, limiting the results to a single record. The Map Function should return the object's Id field as an Output Parameter. Map this Output Parameter to the Id Element in the destination Profile.

 

If a particular record does not already exist in Salesforce, the Id Element in the Update Profile will be empty after the Map. If a request without an Id is then sent to the Salesforce Connector, it will throw an error. If this is a possibility in your integration scenario, you should use a Decision Step after the Map to check that the Id is populated in each record before sending the data to the Salesforce Connector.
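The Decision Step's check can be pictured as a simple partition of the mapped records. A hypothetical sketch (the dict-with-"Id"-key record structure is assumed for illustration):

```python
def split_by_id(records):
    """Route records with a populated Id to the update path; hold back the rest."""
    with_id = [r for r in records if r.get("Id")]
    without_id = [r for r in records if not r.get("Id")]
    return with_id, without_id
```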

 

Scenario 3 - Upserting Records

The Upsert capability of the Salesforce API is a convenient way to do common "insert-new-or-update-existing" integrations. Instead of having to do a lookup against Salesforce to determine if a given record exists and then perform separate insert or update mappings and calls accordingly, you can simply perform one map to the Upsert request and let Salesforce determine whether it needs to do an insert or update.

 

Upserts can be used with standard or custom objects that have a custom "External ID" field configured. This External ID field should represent the primary key of the source system or some other uniquely identifying value, which is also helpful when integrating in the opposite direction. If you don't currently have an External ID field, it is recommended you identify one or create a new custom field specifically for the integration. The Import Wizard within the Salesforce Operation retrieves the list of designated External ID fields for each object type for you to choose from. You must select an External ID for the object.

 

If there isn't a single field in the source data to use as an External ID, see if you can uniquely identify a record by the combination of two or more fields. If so, use a Map Function to concatenate them together and map to the External ID field.
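As a sketch, the concatenation Map Function amounts to joining the candidate key fields with a separator (the field names here are hypothetical):

```python
def composite_external_id(order_number, line_number, sep="-"):
    """Combine two source fields into one unique External ID value."""
    return f"{order_number}{sep}{line_number}"
```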

 

Depending on an object's relationships as defined in Salesforce, you may have the ability to upsert or at least reference related records by their External IDs as well. For example, you can upsert an Opportunity based on an Opportunity's External ID (e.g. "Order Number") and associate it with an Account record by specifying the Account's External ID (e.g. "Legacy Customer Number").

 

To enable this, in the Operation configuration select the object and then select a Reference Field. Check the box "Use External Id for Reference" and select the appropriate Object Type and Ref External Id. Then in your map, you can map a value from your source data directly and avoid making a Connector Call to retrieve the Salesforce internal ID.

 

Not all objects support the Upsert action. Refer to the Salesforce API Guide for a complete list of supported actions per object.

 

Using Reference Fields:

Reference fields in Salesforce are the object fields used to attach a record to a parent or associated object. These reference fields are generally available in the Send Request XML by their name (e.g. AccountId, OwnerId). By default, the request input expects the object's internal ID; however, this ID may not be readily available in the source data. The Upsert action allows you to specify other External ID fields to use in place of the default internal ID. This can save you from performing a Salesforce query (Connector Call lookup) to find the internal ID based on another value, such as Name, from your source data.

 

Scenario 4 - Deleting Records

A few things to keep in mind when deleting records.

 

You must always supply the Salesforce internal Id value. External IDs can't be used.

 

Known Issue (BOOMI-8454: Salesforce Connector throws an exception after successful delete when multiple IDs are used): When deleting multiple records, each request document sent to the Salesforce connector must contain only one Id per document. (This is required even though the "Id" element in the request profile is configured as Max Occurs=Unbounded by default.) If multiple Ids are present in a single document, the records will actually be deleted in Salesforce, but the connector will error with the "Number of results does not match number of SObjects" message when processing the response. To avoid this, use a Data Process shape to split the request data. This can be done before the Map shape to the Salesforce delete profile, or afterward (splitting the Delete profile XML on the OBJECT/DeleteIds/Id element).

  • This applies whether the operation is configured to Use Bulk API or not.
  • Note: Behind the scenes the Salesforce connector DOES batch the documents in the actual request XML sent to Salesforce. It does not make a separate call for each document.
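To illustrate the splitting step, here is a rough Python equivalent of what the Data Process shape does when splitting the Delete profile XML on OBJECT/DeleteIds/Id (the element names follow the profile path above; this is illustrative, not connector code):

```python
import xml.etree.ElementTree as ET

def split_delete_request(xml_text):
    """Turn one delete request containing many Ids into one document per Id."""
    root = ET.fromstring(xml_text)
    docs = []
    for id_el in root.findall("./DeleteIds/Id"):
        doc = ET.Element(root.tag)
        delete_ids = ET.SubElement(doc, "DeleteIds")
        ET.SubElement(delete_ids, "Id").text = id_el.text
        docs.append(ET.tostring(doc, encoding="unicode"))
    return docs
```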

 

Scenario 5 - High Volume API Best Practices

When developing Processes that send/receive high volumes of data to and from Salesforce, refer to these design considerations to ensure that you are optimizing processing efficiency and not exceeding Salesforce API usage limits.

  • Refer to the Salesforce - Web Services API Developer's Guide for API usage limits for your Salesforce edition
    • Salesforce Developers - Section: Using the API with Salesforce Features > Implementation Considerations > Monitoring API Traffic
  • Install and deploy Processes to a Local Atom and consider increasing Atom Memory
  • Review AtomSphere - High Volume Troubleshooting documentation for general data concerns

 

Get Actions:

  • Use Filters - This will limit the number of inbound records based on key field criteria
  • Query multiple objects in one request - In the Query Operation Import Wizard, select associated Parent and/or Child objects (eg. Contact - Parent: Account) if you need to reference this data in your mappings. This will prevent the need for Connector Calls (API web service requests) later in the Process for each individual contact record.
  • Query by Last Modified Date - If you are building a synchronization Process that is continually passing updates, add a LastModifiedDate filter for the object in the Operation so you can set a time window in which you would like to grab modified data. This will prevent the need to run entire object data sets through for each Process execution.
  • Set a Query Limit - Maximize the number of records you would like returned in one request by setting this number in the Operation. You can then update the retrieved records in Salesforce (eg. Contact > Custom Retrieved_Status field) at the end of your Process flow and prevent the same records from being re-processed in the future by filtering on the custom field.

 

Send Actions:

  • Configure the Batch Count - The outbound Salesforce connector operation is automatically configured to batch the XML requests into groups of 200 documents, the maximum allowed in one call by the SFDC API. This helps considerably with API usage statistics.
  • Limit the inbound data - Depending on your source for the outbound Salesforce integration, you will want to consider minimizing the number of records processed during a single execution. In a Database integration for example, you should consider only querying a maximum of 1,000 - 10,000 records at a time.
  • Use the Upsert Action - The Upsert action will perform the logic to either insert or update a Salesforce record automatically. This prevents the need for: 1) a preliminary Salesforce lookup to see if the record exists, 2) a separate Salesforce Insert operation, and 3) a separate Salesforce Update operation.

 

Scenario 6 - Understanding Custom Fields and Objects

One of the great strengths of Salesforce is the ability to easily create custom fields and objects. Because of this, the Boomi Salesforce Connector connects to your Salesforce organization and browses the available interfaces in real time. Custom objects and fields are handled no differently than standard objects and fields. Custom objects/fields can be identified by the __c suffix. If you don't see your custom object or field when importing an object, make sure the security permissions for that object/field allow at least read-only access to the security profile assigned to the user name you're using to connect. If you modify an object in Salesforce after importing it in Boomi, you will need to edit the Operation Component and go through the import wizard again to re-import the recent changes.

 

Scenario 7 - Understanding the Salesforce Date Format

Salesforce Date/Time XML Profile elements should be configured with the following format: yyyy-MM-dd'T'HH:mm:ssZ
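In Python terms (the Boomi pattern above uses Java date-format letters, where Z renders as an RFC 822 offset such as +0000), an equivalent for a timezone-aware datetime would be:

```python
from datetime import datetime, timezone

def salesforce_datetime(dt):
    """Format an aware datetime per the pattern yyyy-MM-dd'T'HH:mm:ssZ."""
    return dt.strftime("%Y-%m-%dT%H:%M:%S%z")
```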

 

Scenario 8 - How to use the OR logical operator between two objects in SalesForce

Question:

It appears that the AND logical operator works between two objects (i.e., parent to grandchild), but the OR logical operator does not seem to work. Is it available? How does it work? Are there any constraints or limitations?

 

Answer:

The SOQL query is automatically generated based on the configuration of the filters.

Currently the filters do not allow expressions involving fields from different objects within the same logical sub-group.

 

As a workaround, you may need to implement two separate connector steps: one that reads from the parent object and another that reads from the other object. Then use a Decision shape to filter the data.

 

Based on the Salesforce documentation:

http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_calls_soql_relationships.htm#i1422304

 

It says "Any query (including subqueries) can include a WHERE clause, which applies to the object in the FROM clause of the current query. These clauses can filter on any object in the current scope (reachable from the root element of the query), via the parent relationships."

 

Scenario 9 - How to Retrieve Attachments in Salesforce (Full Example)

Please visit the following link to review this scenario: How to Retrieve Attachments in Salesforce (Full Example)

 

Scenario 10 - How to use the Batch Results Option on SFDC Query Operations

Please visit the following link to review this scenario: How to use the Batch Results Option on SFDC Query Operations

 

Common Errors

 

Error: Duplicate External ID Error When Attempting to UPSERT Records to Salesforce

Description:

When attempting to perform an upsert into Salesforce the following error occurs: 'OBJ: Account - Duplicate external id specified: sw inc'

 

Solution:

The Process is attempting to update/insert (Upsert) records into Salesforce that have the same external ID.

 

This is a Salesforce restriction: Salesforce will not allow multiple records with the same External ID to be processed in the same batch. By default, the Batch Count set within AtomSphere for an SFDC Upsert Operation is 200. To work around this restriction, set the Batch Count to 1 for the Upsert Operation so Salesforce processes each record individually.
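If setting the Batch Count to 1 is too slow for your volumes, one conceivable alternative is to pre-group the documents so that no batch contains the same External ID twice. A hypothetical sketch, assuming each record is a dict with an "external_id" key (this is not a connector feature, just an illustration of the constraint):

```python
def batch_without_duplicates(records, batch_size=200):
    """Place each record in the first batch that has room and no matching External ID."""
    batches = []
    for rec in records:
        placed = False
        for batch in batches:
            if len(batch) < batch_size and all(r["external_id"] != rec["external_id"] for r in batch):
                batch.append(rec)
                placed = True
                break
        if not placed:
            batches.append([rec])
    return batches
```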

 

 

Error: Error executing Salesforce send: Number of results does not match number of SObjects; Caused by: Number of results does not match number of SObjects

This is related to the Salesforce Operation's outbound actions.

Each Salesforce operation (Create, Update, Upsert, Delete and Merge) is configured differently, specific to the operation.

 

For example:

The Salesforce Delete action deletes an existing record in the Salesforce object defined in the Delete operation. You must supply the internal "ID" field in the request to delete the existing object record. If this internal ID is not readily available in your source data, consider using a Connector Call function to Query data based on a standard value such as Name.

 

For a Create action, it creates a new record in the Salesforce object defined in the Create operation. The internal "ID" field is generated automatically per each document sent to the operation.

 

For more details please see Salesforce operation.

 

Error: ERROR: sf:SERVER_UNAVAILABLE --------------- SERVER_UNAVAILABLE: Too many requests waiting for connections;

This message is coming from Salesforce: the Salesforce server is unable to secure a connection. If this error is consistent or persists over a period of time, it is recommended to create a support case with Salesforce to make them aware of the issue so they can investigate and troubleshoot it.

 

This link is from the Salesforce help site and provides some additional information for this issue.

Why am I getting a SERVER_UNAVAILABLE error message when sending a request to Salesforce?

 

Error: SF_ERROR: ENTITY_IS_DELETED OBJ: <ObjectName>

This is a Salesforce-generated error message. It does not mean that the object itself (for example, "Contact") is deleted; it indicates that a record in the object that you are attempting to update has been deleted.

 

Error: Caused by: com.boomi.model.connector.common.salesforce.SalesforceException: Error executing query. Error message received from Webservice. ERROR: sf:QUERY_TIMEOUT

This error is passed on from Salesforce and relates to retrieving data within Salesforce. Salesforce imposes a 2-minute timeout on queries. Depending on the query elements and the way the Salesforce records are stored, a more specific query may take longer than a more generic one. For example, querying on active elements that have been modified since the last successful run date may time out, while querying only on elements modified since the last run date may not.

 

Note that the overall retrieval does not need to complete within 2 minutes; queries may complete successfully after running for many hours. The two-minute timeout is handled internally by Salesforce.

 

It is important to note that this timeout is different from the HTTP timeout referenced by other documentation, and it is not something that can be controlled via the Atom configuration.

 

Please note: You may want to check the website trust.salesforce.com to check system status for your instance.

 

It may be necessary to modify the query parameters to choose elements that allow Salesforce to process records more quickly. You may even loosen the query parameters and then filter the results downstream within AtomSphere.

 

Error: [Function: SFDC Lookup PBE by ItemRef, Filter out "Note" lines (QB), Connector Call (Step 3)]: Error received executing Salesforce Get (com.boomi.process.ProcessException)

Full Error:

Error received executing Salesforce Get (com.boomi.process.ProcessException)

Caused by: [Function: SFDC Lookup PBE by ItemRef, Filter out "Note" lines (QB), Connector Call (Step 3)]:

Error received executing Salesforce Get ((com.boomi.transform.TransformException))

...

Caused by: Error executing Salesforce query ((com.boomi.model.connector.common.salesforce.SalesforceException))

Caused by: MALFORMED_QUERY: Pricebook2Id = '01s5000000068TtAAI' AND ) ^ ERROR at Row:1:Column:191 unexpected token: ')' ((com.boomi.connector.common.salesforce.v14.service.MalformedQueryFault_Exception))

 

Solution:

This error usually corrects/resolves itself. Please check subsequent executions to make sure the query works as expected.

 

Error: Error received executing Salesforce Get; ...Caused by: java.net.SocketTimeoutException: Read timed out

A Read timed out error occurs after the Connector has waited a certain amount of time for data from Salesforce without receiving a "keep alive" message. This can occur when there is a network interruption and may often be temporary.

 

The connection timeout may also be due to querying large amounts of data. One or more of the following suggestions may help eliminate this issue:

  • In the Query Operation Import Wizard, it is possible to query multiple objects in one request. If you need to reference both a parent and a child object in a map, select the associated Parent and Child objects. This will save connector calls later in the process.
  • Limit the number of records you would like returned in one request by setting a Query Limit in the Operation. You can then update a custom field in Salesforce at the very end of the process to prevent reprocessing of the same record and also filtering on this custom field.
  • Use filters in the Operation to limit the number of inbound records based on key field criteria.
  • If your process is passing updates to another media (such as an internal database), add a LastModifiedDate filter for the object in the operation so you can set a time frame for getting the modified data. This will prevent all of the data in the object being selected each time the process executes.
  • Work with your network administrator to determine if there is additional error information in network log files.

 

Error: Salesforce Error MALFORMED_QUERY: <Field Name> = <value> ) where Field Name is defined as a Number

Description:

You receive this error when performing a select which compares a passed in value, to a Salesforce field which is defined as a number.

 

For example:

MALFORMED_QUERY: Inttra_ID_and_Carrier_ID__c = 801450800002) ^ ERROR at Row:1:Column:65For input string: "801450800002"

 

In this case, the field/element "Inttra_ID_and_Carrier_ID__c" is defined in Salesforce as a number.

 

Solution:

Salesforce expects the input to be a double, and it appears to check this by looking for a decimal point. If you set the input to 801450800002.0, it will work. In this case I was able to create a user-defined function in a map that appended ".0" to the inbound value and then used the new value to perform the lookup. You could also perform the append function, set a document property, and use that document property further downstream if applicable.
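The append logic described above is trivial to sketch (a hypothetical stand-in for the user-defined map function):

```python
def as_soql_double(value):
    """Append '.0' to a whole-number string so Salesforce treats it as a double."""
    return value if "." in value else value + ".0"
```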

 

This issue is referenced in the Salesforce developers forums https://developer.salesforce.com/forums?id=906F00000008rBtIAI

 

Error: Error attempting to browse Salesforce operations. Error executing login. Error message received from Webservice. ERROR: soapenv:Client --------------- No operation available for request {urn:partner.soap.sforce.com}login

Description:

You get the error above when using the Salesforce Connector to connect to your SF sandbox.

 

Solution:

Verify you are using the correct username and password, including the correct token as described above.

Verify that the account is not locked, and that the password is not expired.

If the connection is still failing, try connecting to an older API version by changing the version number in your URL.

 

For example, change:

https://test.salesforce.com/services/Soap/c/31.0

to

https://test.salesforce.com/services/Soap/u/14.0

 

Error: Error received executing Salesforce Get; Caused by: HTTP transport error: java.net.ConnectException: Connection timed out: connect; Caused by: Connection timed out: connect

This is a generic error that occurs due to the data load fetched by the Salesforce connector, and could be caused by any of the following:

 

  • The timeout could be due to querying large volumes of data in one pass. Apply the High Volume API Best Practices from Scenario 5 above: use Filters to limit inbound records, query associated Parent/Child objects in one request, filter by LastModifiedDate, and set a Query Limit.
  • You may need to recreate the Salesforce connection with a reset security token.
    • Try to re-import Operations and check whether a connection can still be established successfully.
    • Retest the process using the same connection in the test Atom Cloud and interpret the results.
  • The network issue could be caused by changes in proxy configuration:
    • The proxy settings are available in the Atom's container.properties file. A proxy or network/firewall setting could be preventing the communication from successfully accessing SFDC. If these settings do not look correct, reset them, then stop and restart the Atom.
  • You may be able to determine more about the timeout error, by examining network traffic using a tool such as Charles Proxy.

 

Error: ERROR at Row:n:Column:n value of filter criterion for field <Field Name> must be of type double and should not be enclosed in quotes

Description:

You receive the following error when attempting to pass a null value in a parameter to be used as a filter for an element identified as a Data Type "Double" in Salesforce.

 

Solution:

In this type of situation, you must use the word NULL or null in the static parameter value field, with no quotation marks.

 

Error: ERROR at Row:1:Column:1862 No such column 'YTD_Dist_Sales__c' on entity 'Account'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names

Description:

After removing a custom field from Salesforce, the operation fails with the above error (or a similar one).

 

Solution:

The quickest way to correct this issue is to go into the Salesforce operation, locate the element in the object, un-check the box next to the element that you deleted, and save the operation. Then save the process and re-deploy.

 

Error: CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY

This error is coming from Salesforce. Immediately after "CANNOT_INSERT_UPDATE_ACTIVATE_ENTITY" is a reference to a specific object following "OBJ:", e.g. Purchase Order.

Please research the configuration of the object mentioned within Salesforce.

 

Error: SF_ERROR: INVALID_ID_FIELD OBJ: ContentVersion - You must specify a FirstPublishLocation where you have publish permission

There are two things to check in Salesforce in order to resolve this. The first is to make sure that your Salesforce user has permission to upload documents to the workspace being used. The second thing to check is that the workspace ID matches the one your Salesforce user is set up to use.

 

Error: No such column 'Total__Balance_Pre__c' on entity 'Account'. If you are attempting to use a custom field, be sure to append the '__c' after the custom field name. Please reference your WSDL or the describe call for the appropriate names.

Description:

You may receive this, or a similar error, when trying to execute a process after adding new fields to Salesforce:

 

Solution:

The error indicates that there is a difference between the fields in the Salesforce object and what is being used in the Operation in the Boomi Process.

 

  • Open the Salesforce Operation > Import Wizard and go through the steps to redefine the Salesforce objects you are connecting to and update the XML response profile with the most recent field changes
  • After generating the new XML profile, verify that the Filters are still set up correctly on the Operation
  • Save the Operation and Process, then re-deploy to start using the updated profile

 

FAQ

How do I generate a security token in Salesforce developer account?

In order to access Salesforce via Dell Boomi AtomSphere, you must replace your current password with a combination of your password and a security token like this:

     MypasswordMytoken

 

If and when you change your password, you will need to request a new security token.

 

To request a security token:

  1. Log into your Salesforce account as the user for whom you wish to reset the token.

  2. Go to your User Profile > Settings > My Personal Information > Reset My Security Token, then click the Reset Security Token button to trigger an email containing the new security token.

  3. Carefully copy the token (do not copy any trailing spaces), append it to your Salesforce password, and paste the combined value into the Salesforce connection's Password field.

 

Is there a way to monitor the number of API calls that are made while accessing an object from Salesforce?

Per the Salesforce API guidance (which is accessible via link from the Boomi guidance on Salesforce connector), the Salesforce query result object contains up to 500 rows of data by default. If the query results exceed 500 rows, then the client application uses the queryMore() call and a server-side cursor to retrieve additional rows in 500-row chunks. You can increase the default size up to 2,000 in the QueryOptions header, as described in Changing the Batch Size in Queries. For more guidance, you can refer to the Salesforce API Guide.

 

The Boomi guidance at the following link shows the standard Get fields which include a field for batching results and setting the batch count:

http://help.boomi.com/display/BOD/Salesforce+Operation#SalesforceOperation-QUERY

 

So, if you set the batch count higher, you can figure out how many API calls will be made by dividing the number of records (e.g. 1.6 million) by the batch count setting and rounding up.
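That arithmetic, with rounding up for the final partial batch:

```python
import math

def api_calls(total_records, batch_size):
    """Estimated number of query/queryMore API calls for a full result set."""
    return math.ceil(total_records / batch_size)
```

For example, 1.6 million records at a batch size of 2,000 works out to 800 calls.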

 

Also, refer to the High Volume API Best Practices at the following link to ensure the best design is implemented for your high volume query:

http://help.boomi.com/display/BOD/Salesforce+Integration#SalesforceIntegration-HighVolumeAPIBestPractices

 

It includes a link to the SF developer guide for how to monitor API traffic: Developer Guide - Section: Using the API with Salesforce Features > Implementation Considerations > Monitoring API Traffic ... here is their link:

http://www.salesforce.com/us/developer/docs/api/index.htm

 

The import did not allow for the selection of the child object. Can an SFDC upsert be performed on parent and child objects in the same Salesforce connector operation?

Per Salesforce’s API, “you can process records for more than one object type in a create() or update() call, but all records must have the same object type in an upsert() call.”

 

Therefore a separate upsert operation must be implemented for each object; it is not possible to upsert two different objects (even if they are parent-child) in the same connector operation.

 

http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_calls_upsert.htm

 

What is the filterValue that I should specify to fetch the records whose EndTime is Not Null in Salesforce?

In the Query filter section of Object Event, create a filter criteria similar to:

Filter Name: EndTime

Field : EndDateTime

Operator : Not Equal To

 

Then in the Connector Parameter section, specify the following:

Input : EndTime

Type : Static

Static Value : 'null'

 

Why am I Unable to see Action or Object picklist after connecting to SF sandbox?

Change the connection URL from:

https://test.salesforce.com/services/Soap/u/31.0

to a previous version:

https://test.salesforce.com/services/Soap/u/30.0

 

Save the connection, then re-import or re-create the object until you see the Action or Object in the picklist.

 

Why can I not see a particular field that I know exists in Salesforce?

Verify that the user configured in the connector has the appropriate permission to see that field.

 

Also, please check what version of the Salesforce API you are using. Is there a more current version (24.0 vs. 18.0)?

If so, modify the URL in the Salesforce connection with the new API version. For example: https://www.salesforce.com/services/Soap/u/24.0

After saving this change and re-importing the profile, check the object for the necessary field.

 

How do I sync the process when a new custom field is added in Salesforce?

  1. Open the Salesforce operation component.
  2. Use the Import button to open the Salesforce Import wizard. The wizard will guide you through the import, asking for the connection, Salesforce object, fields, etc.
  3. Save the operation component and re-save the process. Finally, redeploy the process for the changes to take effect.

 

How do I call a Salesforce Apex webservice class?

To call a custom Apex class exposed as a web service, use the Apex Connector instead of the Salesforce Connector.

 

How do I send/load data from Salesforce using the Bulk API?

There are two options that you need to look at:

 

  1. Use Bulk API – to tell the connector to load data to Salesforce using Bulk API
  2. Batch Count – the maximum number of records in a batch

 

To upload your data with the Bulk API, simply check the “Use Bulk API” checkbox and specify the Batch Count (the default value is 200) in the operation configuration screen. The Dell Boomi Salesforce connector handles everything automatically on the back end, including batching the data according to the Batch Count.

As there is no option to turn on serial mode, some options to address lock contention in Salesforce could be:

  • Reduce batch sizes
  • Use flow control

 

Please note that the Salesforce connector only supports the BULK API option in parallel mode.
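Conceptually, the Batch Count behaves like a simple chunking of the document stream (illustrative only; the connector does this internally):

```python
def chunk_into_batches(records, batch_count=200):
    """Split the record list into groups of at most batch_count per Bulk API batch."""
    return [records[i:i + batch_count] for i in range(0, len(records), batch_count)]
```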

 

How do I Identify the Salesforce record processed within Boomi?

In order to identify the SF record processed within Boomi, you may have to manually search for the record/data in the XML profile returned by the Salesforce connector, using the record ID or some other field in Salesforce that uniquely identifies the record.

 

Alternatively, if you configure the connector's "Tracked Fields", then you would be able to see the recordID in the process reporting data. For reference:

http://help.boomi.com/atomsphere/GUID-103F06E6-94BF-472A-9C50-F3780CE5B497.html

http://help.boomi.com/atomsphere/GUID-C84D1FEF-BD90-46CE-BFD2-33CE720572EE.html

 

How do I construct complex SOQL using the Salesforce REST API?

The Salesforce REST API is needed in order to execute complex SOQL.

 

Use the HTTP Client connector to log in to Salesforce first (this login call goes to the SOAP partner endpoint). The connection URL should look similar to this: https://cs18.salesforce.com/services/Soap/u/34.0

 

You should pass SOAPAction: login in Request Headers.

 

The request profile should be similar to this:

<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <env:Body>
    <n1:login xmlns:n1="urn:partner.soap.sforce.com">
      <n1:username>{1}</n1:username>
      <n1:password>{2}{3}</n1:password>
    </n1:login>
  </env:Body>
</env:Envelope>
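As an illustration, the {1}, {2}, and {3} placeholders correspond to the username, password, and security token. A hypothetical sketch that fills them in and verifies the envelope is well-formed XML (a namespace-trimmed version of the profile above):

```python
import xml.etree.ElementTree as ET

LOGIN_TEMPLATE = (
    '<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">'
    "<env:Body>"
    '<n1:login xmlns:n1="urn:partner.soap.sforce.com">'
    "<n1:username>{username}</n1:username>"
    "<n1:password>{password}{token}</n1:password>"
    "</n1:login>"
    "</env:Body>"
    "</env:Envelope>"
)

def build_login_request(username, password, token):
    """Substitute credentials into the login envelope and check well-formedness."""
    body = LOGIN_TEMPLATE.format(username=username, password=password, token=token)
    ET.fromstring(body.encode("utf-8"))  # raises if the envelope is not valid XML
    return body
```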

 

The Login Response profile will return a sessionID which will be used as Authorization value in the subsequent calls.

 

Build your SOQL and pass it as a parameter into an HTTP Client Connector.

 


 

How do I connect to Salesforce REST API with OAuth 2.0?

Please see How to Connect to Salesforce REST API with OAuth 2.0.
