While @sadok.ben-yahya is right concerning the Spark part of ONE DATA, it may be possible to offload GPU-based computation to a Python runtime.
One of the strengths of ONE DATA is its interoperability with other frameworks. With a Python processor, you can use any GPU-based parallelization library. Note that this requires GPU support inside the Python execution environment, and the respective library must be installed there. Also, once you leave the realm of ONE DATA Core and its Spark-managed computing, memory management and other resource restrictions have to be handled by the Python library or by glue code you provide. When in doubt, your friendly neighborhood DevOps can help.
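As a minimal sketch of what the body of such a Python processor could look like (the function name and workload are assumptions, not ONE DATA API): try a GPU array library first and fall back to a CPU implementation if the environment lacks GPU support, then move results back to host memory before handing them to downstream processing.

```python
# Hypothetical processor body: prefer a GPU-backed array library (here
# CuPy, as one example), fall back to plain CPU code if it is missing.
try:
    import cupy as xp  # NumPy-compatible arrays living in GPU memory

    def heavy_kernel(values):
        """Element-wise transform that benefits from GPU parallelism."""
        arr = xp.asarray(values, dtype=xp.float64)
        result = xp.sqrt(arr * arr + 1.0)  # placeholder for a real workload
        # Copy back to host memory: downstream (Spark-side) processing
        # expects ordinary CPU data, not device arrays.
        return xp.asnumpy(result).tolist()

except ImportError:
    import math

    def heavy_kernel(values):
        """CPU fallback so the workflow still runs without a GPU."""
        return [math.sqrt(v * v + 1.0) for v in values]
```

The try/except split keeps the workflow portable: the same processor runs in a GPU-enabled environment and in a plain one, only slower in the latter.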
Also note that moving data from a Spark-based computation to a Python environment incurs some transmission overhead. You should only consider this option if the task at hand is computationally intensive and genuinely benefits from GPU support. Neural networks are a good example where this will probably pay off.