AI4EOSC Introduces Batch Mode Training: Boosting Efficiency and GPU Access for All
We’re thrilled to announce that AI4EOSC now supports batch mode training, a new feature available in the AI4EOSC Dashboard.

What is batch mode training?
Until now, users who wanted to train a model on specialized hardware had to create a deployment with a dedicated GPU. However, GPUs are scarce, and allocating them permanently to individual users left them under-utilized, since they stood idle most of the time.
With the new batch mode training, users execute their model training by submitting jobs to a queue. Each job runs and, as soon as it completes, its GPU resources are automatically released and made available for other trainings, optimizing hardware usage.
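The scheduling idea above can be sketched in a few lines. This is a purely illustrative simulation, not the AI4EOSC API: the `Scheduler` and `GPU` names are hypothetical, and real batch systems handle queueing, failures, and multi-node allocation far more robustly.

```python
from dataclasses import dataclass, field

@dataclass
class GPU:
    """A single GPU that is either allocated to a job or free."""
    busy: bool = False

@dataclass
class Scheduler:
    """Hypothetical batch scheduler: holds the GPU only while a job runs."""
    gpu: GPU = field(default_factory=GPU)
    completed: list = field(default_factory=list)

    def run_batch(self, jobs):
        for name, job in jobs:
            self.gpu.busy = True    # GPU allocated only for this job's duration
            job()                   # run the training job
            self.gpu.busy = False   # released immediately on completion
            self.completed.append(name)

# Two toy "training jobs" queued one after the other
scheduler = Scheduler()
scheduler.run_batch([
    ("train_model_a", lambda: None),
    ("train_model_b", lambda: None),
])
print(scheduler.completed)   # both jobs ran
print(scheduler.gpu.busy)    # GPU is free again for the next user
```

The key contrast with persistent deployments is the last line: once the queue drains, no GPU stays reserved.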
How will you benefit from batch training?
With batch mode training, the AI4EOSC platform's resources will be used much more efficiently. GPU availability will therefore increase, and more users will have access to GPUs when they need them. In addition, to promote batch mode among users, we have set aside V100 nodes exclusively dedicated to batch training!
As batch mode will be deployed alongside the current persistent mode, the AI4EOSC platform will be able to cover a wider range of user workflows, from quick tests to long-running experiments.
Get Started Now!
To explore this new feature, visit the AI4EOSC Dashboard and read the relevant documentation.