
Workarounds for Redshift statically defined COPY command in different environments

Question asked by james_m_hutton098090 on Jan 15, 2018

hi all

 

Given the issue below (not being able to use variables in a Redshift COPY command), I would like to consider how we can deploy Redshift data load processes to different environments.

We are using local Molecules.

https://community.boomi.com/community/buzz/blog/2016/09/08/getting-data-into-amazon-redshift#jive_content_id_Need_to_use… 
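For context, the root of the problem is that the S3 path is part of the COPY statement's SQL text, so without variable support it ends up baked into the process per environment. A minimal sketch of what an environment-specific statement builder might look like outside Boomi (bucket names, table name, and IAM role ARN are all hypothetical):

```python
# Hypothetical per-environment S3 prefixes; in practice these would come
# from deployment configuration rather than being hard-coded.
BUCKETS = {
    "dev": "s3://acme-redshift-dev/loads/",
    "test": "s3://acme-redshift-test/loads/",
    "prod": "s3://acme-redshift-prod/loads/",
}

def build_copy_sql(env: str, table: str, filename: str) -> str:
    """Build an environment-specific Redshift COPY statement."""
    prefix = BUCKETS[env]
    return (
        f"COPY {table} FROM '{prefix}{filename}' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' "  # hypothetical role
        "FORMAT AS CSV"
    )

print(build_copy_sql("dev", "staging.orders", "orders.csv"))
```

This is only a sketch of the string construction; the point is that whichever workaround is chosen, something outside the COPY statement itself has to supply the environment-specific pieces.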

 

Options considered so far:

  • Use the same S3 bucket and file for all environments, and try to prevent anyone running the same process at the same time for different environments
  • Create a CLI script that calls Redshift, invoked from a Program Command shape OS command; either pass the S3 bucket into the script, or configure each Molecule differently to use a different S3 bucket
  • Use some kind of 'process locking': each process sets a flag somewhere (e.g. a common S3 file) to indicate it has started, then removes the lock when it finishes. If another instance of the process (i.e. in a different environment) starts and finds a lock in place, it stops with an exception
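The locking idea in the last bullet can be sketched as a small cooperative lock. In production the shared store would be a common S3 bucket (via boto3 put/get/delete on a well-known key); here a plain dict stands in so the logic is runnable, and all names are hypothetical:

```python
import time

class LockHeldError(Exception):
    """Raised when another environment's load already holds the lock."""

class ProcessLock:
    def __init__(self, store: dict, key: str = "redshift-load.lock"):
        self.store = store  # stand-in for the shared S3 bucket
        self.key = key

    def acquire(self, owner: str) -> None:
        # Note: against real S3 this check-then-put is racy; S3's newer
        # conditional writes (If-None-Match) can make the put atomic.
        if self.key in self.store:
            raise LockHeldError(f"lock held by {self.store[self.key]['owner']}")
        self.store[self.key] = {"owner": owner, "at": time.time()}

    def release(self) -> None:
        self.store.pop(self.key, None)

store = {}
lock = ProcessLock(store)
lock.acquire("molecule-dev")
try:
    lock.acquire("molecule-test")  # second environment is blocked
except LockHeldError as e:
    print("blocked:", e)
lock.release()
```

One caveat with this approach is crash recovery: if a process dies without releasing, the flag stays set, so a real implementation would probably want a timestamp-based expiry or a manual override.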

 

I don't really like any of these options, so any suggestions would be appreciated.

 

james
