You’ve probably heard about the Go matches between Lee Sedol and DeepMind’s AlphaGo, which AlphaGo won 4-1. You’ve probably heard about deep learning and the hype around machines replacing humans. This post is about how to use the principles of deep belief networks to think quantitatively about coaching and improving the performance of players.
Here’s some background information and a few disclaimers. I don’t like to call myself an expert, because I’m not an academic writing papers on the subject, and I’m only one of millions of people in this space using machine learning and neural networks for thousands of different reasons. What I have done is work in this space for quite some time and raise significant capital based on my work in machine learning, predictive systems, and their practical applications.
Disclaimer: This post is an oversimplification of many concepts for the purposes of discussion. We’re not going to delve into the details; we’re going to keep this very high level. I don’t want to introduce math, just the concepts of training, feedback, and overfitting, and how we can use them to improve our own play or that of others. These concepts already exist in coaching systems and certifications; if you think this post is going to teach you to coach, it probably won’t. It may, however, help you become a better player and think about your behavior and that of those you’re coaching.
Once again: I’m leaving a heck of a lot of details out because the high level concepts are the important parts.
Ok, enough of that, let’s get on with this.
Neural networks, the type of machine learning we’re talking about, consist of a set of input neurons (the input layer), a set of output neurons (the output layer), and optionally one or more sets of intermediary neurons (hidden layers). In most networks inputs pass through a single hidden layer before reaching the output layer; when there is more than one hidden layer, it’s called deep learning. Very simply, each layer except the input layer takes the values from the previous layer and applies weights to them.
The idea is that you feed input variables in and, through a process of training, adjust the weights at each layer to minimize the error between the predicted output and the expected output. This process works for input layers that are hundreds, thousands, or millions of neurons in size, working through 30 or 40 neurons (this varies widely) in the hidden layers and outputting to a handful of output neurons.
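To make that concrete, here’s a tiny sketch of a forward pass (my own illustration with made-up weights, not anything from a real system): three inputs feed two hidden neurons, which feed one output neuron.

```python
import math

def sigmoid(x):
    # Squash any value into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron weighs every input from the previous layer,
    # adds a bias, and squashes the sum.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 3 inputs -> 2 hidden neurons -> 1 output (weights are invented).
inputs = [0.5, -1.0, 0.25]
hidden = layer(inputs,
               weights=[[0.1, 0.4, -0.2], [0.8, -0.5, 0.3]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
```

Training, which we’ll get to next, is nothing more than adjusting those weight and bias numbers until the output matches what we wanted.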
This models, very loosely, how our brains work. There’s still a ton of work to be done to do this better, and in many cases the improvement gained from a neural network is not worth the cost of creating or operating it. Still, when they work well they’re excellent and beautiful in their simplicity.
Let’s consider how this applies to Ultimate. Let’s think of our decision making process simplified to what I’ve described above. What we’re seeing, hearing, feeling, our injuries, energy levels, the weather, sideline noise, field surface, etc. are the input variables. The hidden layers transform and weigh that input based upon our internal modeling. The output layer consists of the probabilities that each decision should be made.
Visualize the following: a vertical stack. You’ve got the disc, there’s a handler to the break side, and five players in the stack. It’s a very common play, and that position can be considered the input to your neural network. What we want from the output is the ideal decision: throw a break, wait to see if a deep cut is open, huck right away, hit an upline handler cut, dump to the break side, and so on. How do you decide what decision maximizes the probability of a good outcome? Training.
General machine learning systems go through a phase of training with input data. Training develops the internal model of weights used between layers. There are a few different types of training methods: supervised, unsupervised, semi-supervised, and reinforcement. We’re going to focus on supervised learning because it’s the easiest to understand and the most applicable here.
In supervised learning we have many sets of input data, and for each one we have a correct outcome. During training we pump the many sets of data into the input, and the learning system compares its performance to the expected output and then adjusts itself internally. Systems like this typically lack memory and act immediately on the current data they see.
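A toy version of that loop, with invented one-dimensional data (my own sketch, not from any real training system): for each input/correct-outcome pair, compare the prediction to the target and nudge the weight and bias to shrink the error.

```python
import math

def predict(w, b, x):
    # A single neuron: weighted input plus bias, squashed to (0, 1).
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Toy supervised data: each input comes paired with its correct outcome.
data = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (3.0, 1.0)]

w, b, lr = 0.0, 0.0, 0.5   # lr is the learning rate
for _ in range(2000):
    for x, target in data:
        p = predict(w, b, x)
        err = p - target              # compare prediction to expected output
        grad = err * p * (1.0 - p)    # slope of the squared error
        w -= lr * grad * x            # adjust the weight to shrink the error
        b -= lr * grad
```

After many passes over the data, the neuron has learned to output low values for the inputs labeled 0 and high values for the inputs labeled 1. The repetition is the point: one example teaches almost nothing.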
How does this work in Ultimate? It’s called practice. When you run through a vertical stack play, or really any other play or movement or throw, again and again, you have an ideal outcome. You take all the data available to you, process it, and then make a decision based on it. If the outcome was not ideal (a throwaway or an interception) you have to go through a process of introspection.
That process of introspection is lacking in many players and amongst teammates. Nothing annoys me more than when a player makes a bad decision and calls of “good try” come in from the sideline. I’m not saying you must chastise the player, but they must understand that the decision was not the correct one. Calls of “receiver error” are also common, but the initiator of a throw must reflect upon whether the decision to make the throw, the type of throw, the flight path, the timing, and so on were correct.
Once you have reflected upon it you must make an adjustment; you must understand what you should have done differently. This updates your internal model, which you should then test again. If you have a weak process of introspection you will not develop your internal model, and you’ll keep making the same errors repeatedly.
Practices are a safe place to develop and reinforce your model. You can also do this at tournaments, and I advocate that players attend as many tournaments as possible early in their careers so they can gain as much experience as possible. That tournament experience is training that helps to develop your model.
This principle also applies to dynamic plays and flow. This is a type of reinforcement learning, where smaller plays with small rewards are pieced together into larger plays. Dynamic plays and flow still contain small plays with best outcomes that need to be trained. Dynamic drills and warmups prepare and reinforce behaviors during these times. They also help build models that can respond to limited input variables, since decisions must be made more quickly to keep the disc moving.
As an aside, think about what chemistry between players really is. It’s all of them having models that work together, so that when they have the same inputs they produce a more desirable combined output. This happens through practicing and playing together over many years, or it can happen because players are excellent learners and are able to shift their models.
I’ve spoken about introspection and adjusting your model. Coaches can assist with this as well but there are some nuances.
In neural networks we back-propagate errors and adjust weights based upon a learning rate (another oversimplification, but as an aside, the work done by Dr. Hinton at the University of Toronto in the mid-2000s resulted in a breakthrough for deep belief networks). What does this mean to you? What sort of feedback process works for you?
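To see why the learning rate matters, here’s a minimal sketch (my own, using an invented one-variable error function rather than anything Ultimate-specific): a moderate rate converges, a tiny one barely moves, and an overly large one overshoots worse with every step.

```python
def descend(lr, steps=50, w=0.0):
    # Minimize the error (w - 2)^2 by gradient descent;
    # the gradient of (w - 2)^2 is 2 * (w - 2).
    for _ in range(steps):
        w -= lr * 2.0 * (w - 2.0)
    return w

good = descend(0.1)     # settles close to the best value, 2
timid = descend(0.001)  # barely moves in 50 steps
wild = descend(1.1)     # overshoots, and the error grows every step
```

The human analogue: too little adjustment after a mistake and you never improve; too violent an adjustment and you swing past the fix into new mistakes.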
I’ve seen people beat themselves up over a mistake and then get worse; few improve that way. Some people crack a joke and improve; most others don’t. You have to find what your learning rate is and how to reward or punish your successes and mistakes. I use a personal rule of three strikes: if I make a mistake three times in a game I pull myself out. It’s very rare, but when it happens I take a moment to think through the plays and the decision making, and adjust my pace or other factors to match the models I’m used to. I also try to replay every point in my head right after it ends (unless I’m going right back out) to analyze and adjust.
Coaches also need to recognize how their players respond; this is what separates good coaches and trainers from the rest. How a player accepts feedback and best responds to it is important. Similarly, a player needs to learn what their best input mechanisms are and communicate that to their coaches. By making small incremental changes and refining this over time, you’ll be able to optimize how quickly your internal model learns from training.
There are many pitfalls when training a network that result in poor performance on real data. One of them is when a small change or noise in the inputs causes a large error in the outputs; this is called overfitting. Overfitting is typically caused by too small a training set (not enough practice), noise in the input (not appropriately evaluating the conditions that caused the output), or too many variables/features (not focusing on the right things).
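Here’s one way to see overfitting with a toy example (my own, with invented numbers): a curve flexible enough to pass through every noisy training point exactly scores perfectly on the data it memorized, yet goes badly wrong on a situation it hasn’t seen.

```python
def lagrange(xs, ys, x):
    # The unique polynomial through every training point:
    # zero error on the data it has memorized.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Noisy samples of a simple underlying trend, roughly y = x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.0, 1.3, 1.8, 3.2, 3.9, 5.1]

seen = lagrange(xs, ys, 3.0)    # reproduces the noisy sample (3.2) exactly
unseen = lagrange(xs, ys, 6.0)  # lands far from the trend's ~6: noise memorized
```

The player version: drilling the exact same rep until it’s perfect, then falling apart the moment a game presents a slightly different look.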
An analogy to overfitting in Ultimate is plateauing. Players reach a point where they aren’t making measurable improvement. Part of that is due to them learning a new skill and then getting stuck, whether in skill development or in their in-game abilities. I’ve found that they tend to be concentrating on too many factors while also lacking the experience to get to the next level. The best players train continuously and recognize that they must refine their model in very specific ways. The players who have plateaued tend to be scattered, working on too many things (or not at all), which dilutes their concentration on particular elements.
In learning networks, considerable effort is spent refining machines and adjusting learning rates, various coefficients, the depth of the network, the size of the hidden layers, and many other things. A single player cannot be expected to adjust their own performance in all of these ways; they must focus on particular items and have coaches who assist with the others.
I hope this post has explained neural networks in a simple manner and shown how you can improve your own game. There’s plenty of material on coaching that covers what I’ve said above, but players tend not to read that material.
Try to focus on getting to practice, working on specific parts of your game to minimize plateaus, getting help, and understanding how you best receive and process feedback. There is a nugget of truth to the idea that 10,000 hours of practice will make you an expert. I’ll go further and say that if you train and are coached to develop your internal models, you can become a better player in far less time.
Remember that you are developing a specific internal model for specific purposes. You need different models for different positions in the game, and you need to develop systems that combine them. That’s what the elite players are doing, and you are capable of it too. You can be an Ultimate player who can play any position, but you have to get there by playing and becoming good at every position.
Thank you for reading and I’m happy to have further discussions on this. In honesty, you’ll probably get better advice on your game from your coaches, just try to be open to their input and work with them to improve your game.
In the same line of thinking, check out Melissa Witmer’s article on specific training for Ultimate versus general fitness.