I have the following workflow, in which the first Extended Mathematical Operation node accesses the current datetime via the SQL function current_timestamp(). The result is added as a column to the two tables connected downstream.
Unfortunately, the timestamps in the two tables differ by a few seconds. I suspect this is because Spark evaluates current_timestamp() separately for each branch of the tree, i.e. once per downstream write. How can I fix this?
Is this a good use case for a caching processor, or is there a better way to handle it?
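For reference, the workaround I am currently leaning toward is to evaluate the timestamp once on the driver and attach it to the DataFrame as a literal, instead of calling current_timestamp() inside the plan (where it can be re-evaluated per query). This is only a sketch: the `F.lit` / `withColumn` calls are standard PySpark, but whether this maps cleanly onto the Extended Mathematical Operation node is an assumption on my part.

```python
from datetime import datetime, timezone

# Capture the load timestamp exactly once, on the driver,
# before the workflow branches into the two table writes.
load_ts = datetime.now(timezone.utc)

# In PySpark, the captured value would then be attached as a constant
# column, so both branches see the identical timestamp no matter when
# each write is executed:
#
#   from pyspark.sql import functions as F
#   df = df.withColumn("load_ts", F.lit(load_ts))
#
# By contrast, F.current_timestamp() is resolved per query, which is
# (I believe) why the two writes end up a few seconds apart.
print(load_ts.isoformat())
```

Would this be more robust than caching, or does caching give the same guarantee here?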
Thanks a lot,