@kristina.dess to add to @1Peter's statement, I can confirm that there is no such thing as “reserving python or other execution environments upfront”. The resources (i.e. the execution environment, or more precisely the connection to it) are allocated only once the python processor enters its execute state. This is independent of the execution context of the WF itself (microservice, manual execution, PL execution, scheduled execution, or the isolation group chosen). Moreover, the timeout starts ticking once the script has been sent to the execution environment controller, and it does include the time spent waiting for the connection. This is partially a safeguard against congestion of the system when python resources run short.
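As a rough mental model of that timeout behaviour (all names here are hypothetical, this is not the platform's real API), the wait for a free connection and the script's own run time are charged against the same budget:

```python
def simulate_execution(connection_wait_s, run_time_s, timeout_s):
    """Toy model of the timeout semantics described above.

    The clock starts when the script is sent to the execution
    environment controller, so the time spent waiting for a free
    connection is charged against the same budget as the script's
    own run time. (Hypothetical names, for illustration only.)
    """
    elapsed = connection_wait_s          # waiting for a connection counts
    if elapsed >= timeout_s:
        return "timeout while waiting for connection"
    elapsed += run_time_s                # then the script itself runs
    if elapsed > timeout_s:
        return "timeout during execution"
    return "finished"

# A script that would easily fit its 60s budget can still time out
# if it first spends 50s queuing for a connection:
print(simulate_execution(connection_wait_s=50, run_time_s=20, timeout_s=60))
```

So a timeout does not necessarily mean the script itself was slow; it may simply have queued too long behind other python processors.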
If you have trouble with timeouts due to python processors running in parallel, isolation groups can indeed come to the rescue (by queuing the jobs of WFs with python processors in them until the previous executions have finished).
Another option is to have the limit on parallel python scripts increased. Of course, this also means the python execution environments need more resources to cope with the higher demand.
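Both options boil down to tuning how many scripts may hold an execution environment at once. Conceptually (this is a toy sketch with made-up names, not how the platform is implemented), it behaves like a semaphore: an isolation group is the count-of-one case, raising the parallel limit raises the count:

```python
import threading

MAX_PARALLEL_SCRIPTS = 2           # hypothetical platform limit
env_slots = threading.Semaphore(MAX_PARALLEL_SCRIPTS)
finished = []

def python_processor(job_id):
    # Blocks (i.e. the job queues) until an execution slot is free.
    with env_slots:
        finished.append(job_id)    # the script itself would run here

# Five WFs with python processors submitted at once: at most
# MAX_PARALLEL_SCRIPTS run concurrently, the rest wait in line.
threads = [threading.Thread(target=python_processor, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(finished))
```

With `MAX_PARALLEL_SCRIPTS = 1` you get the isolation-group behaviour from the previous paragraph: strictly sequential execution, so no job can starve another of a connection.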
And yet another option is to avoid externally executed scripts in the first place. I know it sometimes seems handier to have a python script do the mundane tasks, since one has better tools for the control flow. However, each python script execution is a change in data arrangement: to give the user the flexibility mentioned above, the data has to be (materialized and) collected before it can be supplied to the python script. This harms performance and produces IO overhead. Moreover, you then have to take care of data arrangement, memory and parallelism yourself, things that Spark usually manages for you.
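To make the materialization cost concrete, here is a deliberately simplified contrast in plain Python (a sketch, not the platform's actual data path): transformations that stay inside one engine can stream row by row, while a hand-off to an external script forces a collect-serialize-parse round trip at the boundary.

```python
def pipeline_lazy(rows):
    """Transformations stay inside one engine: rows stream through
    generators without intermediate copies (roughly the style of
    processing Spark handles for you)."""
    cleaned = (r.strip() for r in rows)   # generator, no copy yet
    upper = (r.upper() for r in cleaned)  # still no copy
    return list(upper)                    # materialize once, at the end

def pipeline_external(rows):
    """Hand-off to an external script forces materialization: the full
    dataset is collected, written out, and parsed back at the boundary
    (a string join/split stands in for the real IO here)."""
    collected = list(rows)                # materialize for the script
    serialized = "\n".join(collected)     # IO overhead: write everything out
    returned = serialized.split("\n")     # and read everything back
    return [r.strip().upper() for r in returned]

print(pipeline_lazy([" a ", " b "]))
print(pipeline_external([" a ", " b "]))
```

Both produce the same result, but the second one pays for a full copy and a serialization round trip per script boundary, which is exactly the overhead you avoid by expressing the logic natively instead of in an external script.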