Linux Hardening Best Practices


Objective

This article outlines high-level concepts for hardening Linux systems that will be used to host an AtomSphere Molecule.  Many of the same concepts apply to Atoms and Clouds.

 

Because security requirements and administration policies differ from customer to customer, this article is meant only as a guide.

 

The article will focus primarily on security concepts, but will also discuss system configurations designed to maximize stability and thereby reduce operational costs.

 

Network

The majority of the security concerns will be addressed by proper network, firewall, and security group configuration.

 

Internal Network Traffic

Clustering

Machines in the cluster will communicate with each other via multicast by default.  If your network does not support multicast (many virtual and cloud environments do not), the cluster can be configured to use unicast (see Setting up Unicast Support).

 

The multicast address and port are configurable through Atom Management (see "Multicast Address" and "Multicast Port").

 

The unicast port can be configured via Atom Management (see "TCP Port for Unicast").

 

The "Initial Hosts for Unicast" setting does not need to include all of the nodes in your cluster, but it does need to include a set of machines, some of which will always be online.  This way, you can add and remove nodes without having to update the setting, as long as the base machines are left online.
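
For illustration, a minimal unicast sketch.  The install path and node IPs are hypothetical, and the property names and default port 7800 come from the Setting up Unicast Support article, so verify them against your version:

    # Append unicast settings to the Molecule's container.properties
    # (hypothetical install path and node IPs):
    cat >> /opt/boomi/Molecule/conf/container.properties <<'EOF'
    com.boomi.container.cloudlet.clusterConfig=UNICAST
    com.boomi.container.cloudlet.initialHosts=10.0.0.11[7800],10.0.0.12[7800]
    EOF

    # Allow the unicast TCP port between cluster nodes (firewalld example):
    firewall-cmd --permanent --add-port=7800/tcp
    firewall-cmd --reload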

 

Cluster traffic can be bound to a separate network adapter if required (see "Cluster Network Bind Address").

 

Cloud configurations may include Atom Workers, which also need to communicate across the internal network.  The port range used is also configurable (see "Atom Worker Local Port Range").

 

 

Network Storage

The node machines will need access to the network storage (NFS) server or hardware device.
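
For example, a minimal sketch of mounting a dedicated NFS export on each node.  The server name, export, and mount point are hypothetical; follow your storage vendor's recommended mount options:

    # One-time mount for testing:
    mkdir -p /mnt/boomi_shared
    mount -t nfs nfs01.example.com:/export/boomi /mnt/boomi_shared

    # Persist across reboots via /etc/fstab:
    echo 'nfs01.example.com:/export/boomi  /mnt/boomi_shared  nfs  rw,hard  0 0' >> /etc/fstab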

 

Integration Traffic

Nodes in the cluster must be able to reach any internal integration endpoints (such as an on-premise database).

 

Outbound Traffic from Cluster Nodes

Each node in the cluster will need to be able to access (outbound) https://atom.boomi.com to report health and execution metadata, and to receive deployment, extension, and other updates.

 

The nodes will also need to be able to access https://software.cdn.boomi.com/ to download release updates (see Hostnames and IP addresses for connecting with Dell Boomi for more information).
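
A quick way to verify outbound access from each node is a simple HTTPS request to both hosts; any HTTP status code in the response means the connection and TLS handshake succeeded:

    # Print only the HTTP status code for each platform endpoint:
    curl -sS -o /dev/null -w '%{http_code}\n' https://atom.boomi.com
    curl -sS -o /dev/null -w '%{http_code}\n' https://software.cdn.boomi.com/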

 

The nodes will need outbound access to any other SaaS/Cloud applications needed for business integration logic.

 

Inbound Traffic

Web services (or AS2) hosted in your Boomi environment can be exposed to your internal network, or through your firewall as required.  Clustered environments (Molecules and Clouds) typically sit behind a load balancer.

 

The load balancer should only expose the ports (and SSL protocols) necessary to support the integrations.  Typically we expose only port 443, and we configure the load balancers with a minimum TLS version.

 

The Boomi service on the nodes themselves will typically listen on a higher port (e.g. 9093), and the load balancer must forward traffic to the appropriate port.
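
As an illustration only, a minimal nginx sketch of this pattern.  Boomi environments often use hardware or cloud load balancers instead, and the certificate paths and node IPs here are hypothetical:

    # Terminate TLS 1.2+ on 443 and forward to the node listeners on 9093:
    sudo tee /etc/nginx/conf.d/boomi.conf <<'EOF'
    upstream boomi_nodes {
        server 10.0.0.11:9093;
        server 10.0.0.12:9093;
    }
    server {
        listen 443 ssl;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_certificate     /etc/nginx/certs/boomi.crt;
        ssl_certificate_key /etc/nginx/certs/boomi.key;
        location / {
            proxy_pass https://boomi_nodes;
        }
    }
    EOF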

 

The load balancer can be configured to perform health checks on the individual nodes (using https://<node IP>/_admin/status).
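
For example, to spot-check a node by hand.  The node address is hypothetical, include the node's listener port if it is not 443, and -k skips certificate verification (common when nodes use self-signed certificates):

    curl -k https://10.0.0.11:9093/_admin/status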

 

OS

General

Operating system configuration will be based upon the IT best practices within each organization; in other words, follow your general operations best practices.

 

Your business may consider the following:

 

Dedicated Machines

When possible, machines should be dedicated to running the AtomSphere application.  The preference should be to run a single Atom ( or cluster node ) per machine.  As a best practice, isolate production instances from any development or test resources.

 

Some applications, however, require the Atom to be installed on the same hardware as the application itself.

 

User Configuration

The AtomSphere application/service (Atom, Molecule, or Cloud) should be installed and run under a non-admin user and group.  We typically refer to these as the "boomi" user and "boomi" group.  The service should NOT run as root, and the install and working directories should be owned by this user and group.  This user should NOT be given terminal/direct login access.
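
A minimal sketch of creating such an account.  The "boomi" names follow the convention above, and the install path is hypothetical:

    # Dedicated, non-login service account:
    groupadd boomi
    useradd -r -g boomi -s /sbin/nologin boomi

    # The install directory should be owned by this user and group:
    chown -R boomi:boomi /opt/boomi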

 

This Boomi user and group should be dedicated to the AtomSphere product.

 

In a clustered environment, the uid/gid of the service user must be consistent across the nodes.  When applicable, configure the user within your IdP/LDAP.  Otherwise, make sure the uids/gids match in /etc/passwd and /etc/group.
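
When managing local accounts, you can pin the uid/gid explicitly so every node matches.  The value 1500 is an arbitrary example:

    # At creation time (use groupmod/usermod if the account already exists):
    groupadd -g 1500 boomi
    useradd -r -u 1500 -g 1500 -s /sbin/nologin boomi

    # Verify on each node; the output should be identical everywhere:
    id boomi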

 

Storage

Install Directory

When installing an Atom, it is recommended to install into a separate partition/drive from the operating system.  This way, disk utilization issues will not impact the OS.
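
For example, a hypothetical /etc/fstab entry that puts the install directory on its own logical volume (device, filesystem, and mount point are illustrative):

    echo '/dev/mapper/vg0-boomi  /opt/boomi  xfs  defaults  0 0' >> /etc/fstab
    mkdir -p /opt/boomi
    mount /opt/boomi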

 

Cloud and Molecule installs will utilize network storage.  As a best practice, these NFS drives should be dedicated to each specific cluster and not shared across applications (for performance reasons).

 

When choosing storage options, consider availability and backup capabilities.  Some devices allow for block-level replication (for backups) with minimal performance impact on the OS.  File-level replication can have a performance impact.  (See Best Practices for Run Time High Availability and Disaster Recovery.)

 

Performance Considerations

In a high-volume clustered environment, consider increasing the default open file limit (a sample limits configuration follows the references below).  References:

Too Many Files Open Error 

Atom Cloud Installation Checklist (Linux) 
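
As a sketch, the limit can be raised for the boomi user via /etc/security/limits.conf.  The value is illustrative; see the articles above for guidance:

    cat >> /etc/security/limits.conf <<'EOF'
    boomi  soft  nofile  65536
    boomi  hard  nofile  65536
    EOF

    # Note: limits.conf does not apply to systemd-managed services; for
    # those, set LimitNOFILE=65536 in the service unit instead.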

 

The AtomSphere services tend to read and write many small files within the installation directory.  When possible, monitor your systems for IO and throughput concerns.
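
For example, two quick checks (iostat ships in the sysstat package; the mount point is hypothetical):

    # Extended per-device IO statistics, refreshed every 5 seconds:
    iostat -x 5

    # Many small files can exhaust inodes before disk space runs out:
    df -i /opt/boomi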

 

Working Directory

In clustered environments, you MUST configure a "local working directory."  This directory should be owned by the boomi user and should be on a separate partition on the local machines (to protect the OS if that drive were to fill).  This can be configured during the install, or afterward through Atom Management.
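
A minimal sketch of preparing such a directory.  The path is hypothetical; point the installer or the Atom Management setting at it afterward:

    # Assumes a dedicated local partition is already mounted at /var/boomi:
    mkdir -p /var/boomi/work
    chown boomi:boomi /var/boomi/work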

 

Java Temp

The local "java temp" directory can be configured, but it is usually not necessary to do so.  By default, the system will use the default Java tmpdir.  If this value defaults to a network drive, however, it should be changed.  (Network users in Windows environments are occasionally configured to use a network-mounted home directory.)
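
If it does need to change, the standard JVM flag is -Djava.io.tmpdir.  On a typical Linux install it can be added to the atom.vmoptions file; the paths below are hypothetical, so confirm the vmoptions filename and location for your install:

    mkdir -p /var/boomi/tmp && chown boomi:boomi /var/boomi/tmp
    echo '-Djava.io.tmpdir=/var/boomi/tmp' >> /opt/boomi/Molecule/bin/atom.vmoptions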

 

Service Configuration

Creating a Linux systemd service to start the Atom 

Linux server systemd service unit definition to address rolling restart failures and server reboots due to dependencies 
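
For orientation, a minimal unit sketch assuming a typical Molecule install under /opt/boomi/Molecule with the stock bin/atom start/stop script.  See the linked articles for a production-ready definition, including restart and dependency handling:

    sudo tee /etc/systemd/system/boomi.service <<'EOF'
    [Unit]
    Description=Boomi Molecule node
    After=network-online.target remote-fs.target

    [Service]
    Type=forking
    User=boomi
    Group=boomi
    ExecStart=/opt/boomi/Molecule/bin/atom start
    ExecStop=/opt/boomi/Molecule/bin/atom stop
    LimitNOFILE=65536

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable boomi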

 

Other Atom Management Best Practices

The following settings are only a subset of what is available via Atom Management.  Settings like these allow you to control how the AtomSphere service will utilize system resources.

 

Purge History - Reduce to lower disk usage

 

Force Restart - MUST be set in clustered environments to help with rolling restarts and Atom updates

 

Directory Levels (Atom Data Dir Level and Process Execution Dir Level) - Consider setting these higher (suggest 2).  This can improve performance by reducing the number of files per directory

 

Atom Pending Shutdown Delay - see Reference Guide


Maximum Forked Execution Time in Cloud - (Clouds and Forked Molecules only) Should be set (suggest 1 day = 86400000 ms)


Purge Schedule For Temporary Data - can be reduced to match Max Forked Execution Time


Maximum Simultaneous Forked Executions per Node - (Clouds and Forked Molecules only) Should be set to avoid over-committing memory resources on nodes

