1 Answer
Here is documentation regarding the reward graph and how to interpret it.
In AWS DeepRacer, training happens in iterations. Each iteration is a collection of n episodes; n is 20 by default for PPO and configurable, while for SAC it is fixed at 1. At the end of every iteration, the latest model is saved as a checkpoint for evaluation. Evaluation runs for 5 episodes (called trials), and the evaluation metrics (average completion percentage, average reward value) are saved. The current criterion for selecting the best model is the maximum average completion percentage.
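The selection logic described above can be sketched in a few lines. This is an illustrative sketch only: the names (`Checkpoint`, `best_checkpoint`, the field names) are hypothetical and not part of any AWS DeepRacer API; the numbers are made up for the example.

```python
# Hypothetical sketch of DeepRacer-style checkpoint selection: after each
# training iteration the latest model is evaluated for 5 trials, and the
# checkpoint with the highest average completion percentage is chosen.
from dataclasses import dataclass

@dataclass
class Checkpoint:
    iteration: int
    avg_completion_pct: float  # mean completion % over the 5 evaluation trials
    avg_reward: float          # mean reward over the 5 evaluation trials

def best_checkpoint(checkpoints):
    """Return the checkpoint with the maximum average completion percentage."""
    return max(checkpoints, key=lambda c: c.avg_completion_pct)

# Example: three saved checkpoints with their (made-up) evaluation metrics.
history = [
    Checkpoint(iteration=1, avg_completion_pct=42.0, avg_reward=110.5),
    Checkpoint(iteration=2, avg_completion_pct=88.0, avg_reward=301.2),
    Checkpoint(iteration=3, avg_completion_pct=76.0, avg_reward=350.9),
]

best = best_checkpoint(history)
```

Note that under this criterion, iteration 2 wins even though iteration 3 has a higher average reward, because the selection keys on completion percentage, not reward.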
Hope this information is helpful for you.
answered a year ago