Hi Changsoo
Hyperparameters directly impact how the model is updated; they control the settings of the optimization algorithm that is used to "solve" for the model that gives the maximum expected cumulative return. Changing hyperparameters can improve the convergence of the model or worsen it. For example, if you increase the learning rate, the weights in your neural network will update in larger increments. The model may improve (train) faster, but the risk is that you miss the optimal solution, or that the model never converges because the updates are too large. Finding good hyperparameters often requires trying a number of different combinations and then evaluating the performance of the model versus time spent training, or some other metric. For example, I am busy training a 3 m/s model (with 2-speed granularity) using a learning rate of 0.001 and a low number of epochs. At around 90 minutes of training I can see my model starting to do a lap now and then. If the learning rate were smaller, it would probably take longer for my model to complete a lap.
Note that at 3 m/s my model will not be as fast as a converged 5 m/s (or faster) model, but those will take a long time to converge. We increased the training speed in the console to a max of 8 m/s. Training at speeds faster than 8 m/s tends to send the model spinning off the track.
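To make the learning-rate effect concrete, here is a toy sketch (plain gradient descent on a one-weight function, not DeepRacer code): with a small rate the weight creeps toward the minimum, while a rate that is too large makes every update overshoot and the weight diverges.

```python
# Toy illustration: one weight w minimizing f(w) = w**2 by gradient descent.
# The gradient is 2*w, so each update is w <- w - lr * 2 * w.

def step(w, lr):
    grad = 2 * w          # derivative of f(w) = w**2
    return w - lr * grad  # gradient-descent update

w_small, w_large = 1.0, 1.0
for _ in range(5):
    w_small = step(w_small, lr=0.001)  # small, stable steps
    w_large = step(w_large, lr=1.2)    # too large: each step overshoots

print(w_small)  # ~0.990  -> slowly approaching the minimum at 0
print(w_large)  # ~-5.378 -> growing in magnitude, i.e. diverging
```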
Kind regards
De Clercq
Hi Changsoo
I did the following tests overnight to show the impact of hyperparameters.
I trained 4 models on the Kumo Torakku track, each for 180 minutes, using my own reward function that does some center-line following, scales the reward for driving fast, etc.
I alternated the speed settings and the hyperparameters:
Model 1: 3 m/s, 2-speed granularity, learning rate = 0.001, epochs = 3
Model 2: 3 m/s, 2-speed granularity, default hyperparameters
Model 3: 5 m/s, 2-speed granularity, learning rate = 0.001, epochs = 3
Model 4: 5 m/s, 2-speed granularity, default hyperparameters
Doing a 5-lap evaluation on the Kumo Torakku track, the lap completion percentages were:
Model 1: 100% 100% 100% 100% 100%
Model 2: 46% 67% 61% 100% 62%
Model 3: 70% 58% 100% 100% 100%
Model 4: 63% 88% 100% 36% 27%
This shows you the impact of playing with the hyperparameters.
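For reference, a reward function in the spirit of the one used in these tests might look like the sketch below. It uses the standard DeepRacer params dict (center-line following with the reward scaled by speed); it is an illustration, not the exact function behind these results.

```python
def reward_function(params):
    # Standard DeepRacer input parameters
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    speed = params['speed']  # in m/s
    all_wheels_on_track = params['all_wheels_on_track']

    # Give a minimal reward when the car leaves the track
    if not all_wheels_on_track:
        return 1e-3

    # Reward staying close to the center line, in three bands
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3

    # Scale the reward for driving fast
    reward *= speed

    return float(reward)
```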
Kind regards
De Clercq
I copied all these waypoints out of the Kumo log, in case it helps anyone.
https://gist.github.com/joezen777/98daa6496acf6a6df3269f253f9388f9
Hi
To get the waypoints you can download the track's .npy file and use the code in the log-analysis notebook to extract them.
Log-analysis is here
https://github.com/aws-samples/aws-deepracer-workshops/tree/master/log-analysis
Track npys are here
https://github.com/aws-samples/aws-deepracer-workshops/tree/master/log-analysis/tracks
See breadcentric's blog on how to use log-analysis (the link is on the Pit Stop page).
https://aws.amazon.com/deepracer/racing-tips/
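If you only need the waypoint coordinates and not the full notebook, a minimal sketch along the following lines should work. The filename is hypothetical (use whichever .npy you downloaded), and the row layout assumed here (center, inner and outer x/y pairs) matches the workshop track files; verify it against your download.

```python
import numpy as np

# Load a downloaded track file; the filename here is hypothetical
waypoints = np.load('Tokyo_Training_track.npy')
print(waypoints.shape)      # expected: (n_waypoints, 6)

center = waypoints[:, 0:2]  # center-line x, y
inner = waypoints[:, 2:4]   # inner-border x, y
outer = waypoints[:, 4:6]   # outer-border x, y

# Print the center-line waypoints, one per line
for i, (x, y) in enumerate(center):
    print(i, round(float(x), 3), round(float(y), 3))
```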
Kind regards
De Clercq
Hi @DeClercq-AWS,
I'm using your script, but I have a couple of questions:
1- Are Yaw and Steering in degrees or radians? They seem to be in radians.
2- How can we include other parameters?
3- It seems Track Width is not returning the right value. How can we confirm it?
4- Is Progress defined from 0-1 or 0-100? It seems to be 0-1, but the documentation says 0-100.
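For what it's worth, a quick sanity check on the extracted values can settle the unit questions; the thresholds below are my own assumptions, not from the docs:

```python
import math

def looks_like_radians(yaw_values):
    # Headings in radians stay within roughly [-pi, pi]; in degrees they
    # would range up to +/-180, so the magnitudes give the unit away
    return max(abs(v) for v in yaw_values) <= math.pi + 0.01

def progress_scale(progress_values):
    # If no value ever exceeds 1.0 across full laps, the scale is 0-1
    return '0-1' if max(progress_values) <= 1.0 else '0-100'

print(looks_like_radians([0.12, -2.9, 1.57]))  # True -> radians
print(progress_scale([0.05, 0.46, 0.99]))      # 0-1
print(progress_scale([5.0, 46.0, 99.0]))       # 0-100
```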
Thanks,
cladeira
Hi,
The progress in the docs says it's a float between 0-100? Or is it 0-1? :)
Regards.
JJ