Perhaps the simplest approach, if the intermediate tables can't stay (perhaps under an appropriate name?), would be to use the same table for all workflows and tick "Take Schema from New".
The previous data in that table would then always be overwritten by the latest step. The disadvantage is that if the final table is used, for example, in an App, the App will be broken while the production line runs (wrong column format / data etc.).
Another solution is to add a final cleanup workflow to the production line that uses the API to delete the intermediate data tables. Since you will need to determine the table id within the workflow, you will either need some fancy JSON-parsing processor kung-fu or a Python processor.
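If you go the Python processor route, the id lookup part could be sketched roughly like below. This is only an illustration: the payload shape, field names, and the `DELETE /api/tables/{id}` endpoint are assumptions, so check your platform's API documentation for the real format.

```python
import json

# Hypothetical response from a "list tables" API call; the real
# payload shape depends on your platform's API.
list_response = json.dumps({
    "tables": [
        {"id": 101, "name": "final_output"},
        {"id": 102, "name": "intermediate_step_1"},
        {"id": 103, "name": "intermediate_step_2"},
    ]
})

def find_table_ids(response_text, name_prefix):
    """Return the ids of all tables whose name starts with name_prefix."""
    tables = json.loads(response_text)["tables"]
    return [t["id"] for t in tables if t["name"].startswith(name_prefix)]

# Collect the ids of the intermediate tables, then issue one DELETE
# request per id (e.g. DELETE /api/tables/102) in the cleanup workflow.
ids = find_table_ids(list_response, "intermediate_")
print(ids)
```

Giving the intermediate tables a common name prefix, as assumed here, makes the cleanup step robust: the workflow never needs hard-coded table ids.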