|
Post by 33coach on Dec 30, 2015 10:35:22 GMT -6
Quote from an earlier post:
"I think everyone here is in agreement that information is only as good as what you do with it. Every time I see something like 'pass concepts encountered during a season' or 'zone targeted for highest completion percentage,' it is followed by 'That's film evaluation, not metrics.' So I don't know what metrics are. I do think some data is far more useful than other data, so I can understand when coaches laugh at some young guy boasting about his points per possession. But I also understand how someone could use that information to determine whether his defense is still as effective as it used to be while other factors are affecting the total score. One thing that needs to be clarified is the idea that guys who like to accrue data are spending hours in a dungeon with an abacus figuring this stuff out. I can pull up some of these obscure metrics that I don't even care about in a matter of seconds just by using Excel or Hudl. I'm not sacrificing hours in the weight room with my team by looking up points per possession. I'm taking the total number of points allowed on the season and dividing it by the number of possessions in a game or a season. It takes five minutes, tops. If I wanted to know how many yards my running back got on days he ate a ham sandwich versus days he didn't eat a sandwich at all, it would take a couple of stupid questions and two minutes of long addition. Can we stop pretending that pro-metrics guys are spending weeks at a time figuring this stuff out?"

My feeling, and this is coming from someone who does tons of data analysis (buzzword: "big data"), is that any time you are reviewing film, that is nothing more than a data input stream. From there, it is your job to analyze and report on that data against the KPIs (Key Performance Indicators) you determined for your program. You should get percentages back based on those KPI targets, and that determines the success of your program.
Sure, you could say "well, the scoreboard tells us the success," but what if you played all the worst teams in the state in your league? How do you know that you are any good? While you are blowing people out 60-0, your playoff opponents might have been able to blow them out 100-0 given the chance. KPIs are how you determine whether you are a cupcake that got a lucky schedule or a force to be reckoned with.
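The points-per-possession and KPI-percentage arithmetic described above is simple enough to sketch. This is a hypothetical illustration only; the stat names, targets, and totals are made up, not anyone's real season data.

```python
# Points per possession: total points allowed divided by total possessions,
# the "five minutes in Excel" calculation described above.
def points_per_possession(points_allowed, possessions):
    return points_allowed / possessions

# KPI report: percentage of each preseason target actually achieved.
# (For "lower is better" stats, compare in the other direction.)
def kpi_report(actuals, targets):
    return {kpi: round(100 * actuals[kpi] / targets[kpi], 1) for kpi in targets}

# Hypothetical season: 112 points allowed over 80 defensive possessions.
ppp = points_per_possession(112, 80)  # 1.4 points per possession

# Hypothetical offensive KPIs: targets set preseason, actuals from film review.
report = kpi_report(
    actuals={"yards_per_carry": 4.2, "third_down_pct": 48.0},
    targets={"yards_per_carry": 3.5, "third_down_pct": 45.0},
)  # {"yards_per_carry": 120.0, "third_down_pct": 106.7}
```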
|
|
|
Post by coachd5085 on Dec 30, 2015 11:17:36 GMT -6
[Quoting the posts above.]
coachphillip, I realize this discussion has been pretty vague regarding the definition of "metric." I guess in my mind it is like the Supreme Court's definition of porn. To try to help, though, the ones I referred to in the OP were examples where numerical information was used to produce other numerical information, like "points per possession" or "points per yard," or maybe some of those "efficiency" stats: things where you are using results to compile more information, as opposed to compiling data and analyzing things you can visually describe.

33coach, can you tell me what KPIs, or whatever other metric/numerical data set, would provide you with the information you are looking for in the highlighted part of the post? I would argue that is PRECISELY the weakness of "measurement metrics." In the situation you described, wouldn't almost ANY "metric" other than grading alignment, assignment, technique, and hustle just describe other ways you obliterated a team?
|
|
|
Post by 33coach on Dec 30, 2015 11:27:51 GMT -6
[Quoting coachd5085's reply above.]
Opposing stats would be able to tell you how good you really are. For example: if you win by shutout, but your opponent averages 3 YPC (yards per carry) and 5 YAC (yards after catch), then couple that with special teams stats; if their average return is 30-plus, that tells me you have a problem. Before getting to individual grades, I already know where I need to look hardest for coaching points. That's the power of metric analysis. It's not so much looking at your own stats; it's looking at your opponents' stats against you.
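A minimal sketch of that "opponent vs. you" check: compare what an opponent did against your defense to simple thresholds, and flag where to look first on film. The stat names, values, and thresholds below are hypothetical.

```python
# Flag the categories where an opponent beat your internal thresholds,
# even in a shutout win. All numbers are invented for illustration.
def flag_problems(opponent_stats, thresholds):
    """Return the stat categories where the opponent exceeded your threshold."""
    return [stat for stat, value in opponent_stats.items()
            if value > thresholds.get(stat, float("inf"))]

# Hypothetical shutout win where the film still needs a hard look at
# run fits, tackling, and kick coverage.
flags = flag_problems(
    opponent_stats={"ypc": 3.8, "yac": 5.5, "avg_kick_return": 32.0},
    thresholds={"ypc": 3.0, "yac": 4.0, "avg_kick_return": 25.0},
)  # ["ypc", "yac", "avg_kick_return"]
```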
|
|
|
Post by spos21ram on Dec 30, 2015 12:36:59 GMT -6
Looking at opponents' stats against other teams, then comparing them to the game they played against you, would only work if you had all common opponents. And what about weather or injuries in their other games? Players academically ineligible for a portion of their season? Once again, tons of variables. But then again, I'm asking myself why I would care about those comparisons. If I just beat a team 50-0 that sucked against the rest of their schedule, or that dominated their schedule, how does that help my team going forward? We just beat them 50-0.
If I were to do that type of comparison, it would be for my personal entertainment so I could BS with the coaches: hey, we really just destroyed a good team; or, don't get too confident, boys, my statistical analysis concluded that they blow.
|
|
|
Post by 33coach on Dec 30, 2015 12:44:25 GMT -6
[Quoting spos21ram's reply above.]

I would only look at them vs. us, not them vs. others. It helps the team going forward by knowing where your downfalls are. Even in a 50-0 game, you still made mistakes on a systematic level, and overlooking those is dangerous. 8-0 doesn't mean perfect.
|
|
|
Post by fantom on Dec 30, 2015 16:19:54 GMT -6
[Quoting 33coach's reply above.]

Yeah, but do those stats change the way that you plan to do things?
|
|
|
Post by 33coach on Dec 30, 2015 16:28:40 GMT -6
[Quoting fantom's question above.]

They should. You obviously have a glaring issue in your defense if a team you were able to beat so badly found something that was working.
|
|
|
Post by spos21ram on Dec 30, 2015 16:45:44 GMT -6
[Quoting the exchange above.]

I fully understand the need to always improve, no matter the scores of games. Our freshman team won every game by over 40 points this season. I found plenty of mistakes on film for us to correct without using any stats or numbers (we don't keep freshman stats). That was no surprise, because no matter how great a freshman team is, they're only 14-15 years old; they're going to make mistakes. But that's not my point. We both come to the same conclusion for our teams, just using different styles or methods, I guess you could say.
|
|
|
Post by coachd5085 on Dec 30, 2015 17:27:49 GMT -6
[Quoting 33coach's reply above.]

Again, though, the things you are talking about would BEST be discovered and addressed via film review. That is my contention here. The "data" sets you are describing don't show the mistakes; they show the RESULT of those mistakes. I maintain that something is wrong with the process if the staff is relying on secondary sources, as opposed to primary sources, to find the vast, vast majority of needed corrections.
|
|
|
Post by 33coach on Dec 30, 2015 17:34:28 GMT -6
[Quoting coachd5085's reply above.]

The only data source we have is film, so I'm not sure where you think I'm getting the data initially; I think this is the point of contention. Evaluating data does not replace film, it enhances the outcome.

1) Watch the film: self-scout (the process I described above).
2) Watch the film: individual evaluation.
3) Plan based on the data derived from the two film reviews.

That's using metrics in practice. Now you can take it a step further and graph the data to get a game-over-game analysis.
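As a sketch of the game-over-game graphing idea, assuming a made-up weekly series of opponent yards per carry allowed:

```python
# Game-over-game deltas for one metric, e.g. opponent yards per carry
# allowed, week by week. The weekly values below are hypothetical.
def game_over_game(values):
    """Change in the metric from each game to the next."""
    return [round(later - earlier, 2) for earlier, later in zip(values, values[1:])]

weekly_ypc_allowed = [2.8, 3.4, 3.1, 4.0]
deltas = game_over_game(weekly_ypc_allowed)  # [0.6, -0.3, 0.9]
# A run of positive deltas says the run defense is trending the wrong way,
# even if every one of those games was a win.
```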
|
|
|
Post by natenator on Dec 30, 2015 17:48:10 GMT -6
[Quoting the posts above.]
And this is exactly why college recruiters don't care about HS player metrics: they don't translate. My fall team of 13- and 14-year-olds was a powerhouse. Put them in our summer league and they are middle of the road. I didn't need to calculate metrics to know that; I just needed a decent pair of eyes and half a brain. I'm still waiting on someone to tell me what KPIs not obtained through film analysis have led to actionable outcomes that improved their results (the TRUE intent of KPIs). No one has done it so far.
|
|
|
Post by 33coach on Dec 30, 2015 17:52:39 GMT -6
[Quoting natenator's post above.]

How would you obtain anything not through film? KPIs are not obtained; they are set. For example, let's say at the start of the year, looking at my opponents, I say, "The Key Performance Indicator for our offense is 3.5 yards per carry." Then, based on games and film review, you see where you measure against that goal: if you are above the goal, you didn't set the bar high enough; if you are below it, something is wrong (it could be personnel, technique, etc.). You can think of KPIs as yearly goals based on a hypothesis (and I'm using the scientific definition of hypothesis).
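The above-goal / below-goal rule reads naturally as a tiny function. The targets and actuals here are hypothetical, standing in for numbers a staff would set preseason.

```python
# Evaluate a season actual against a preseason KPI target, per the rule
# above: beating the goal means the bar was set too low; missing it means
# something (personnel, technique, scheme) needs investigating.
def evaluate_kpi(actual, target, lower_is_better=False):
    beat_goal = actual < target if lower_is_better else actual > target
    return "raise the bar next year" if beat_goal else "investigate personnel/technique/scheme"

evaluate_kpi(4.1, 3.5)                        # offense beat its YPC goal
evaluate_kpi(2.4, 3.0, lower_is_better=True)  # defense beat its YPC-allowed goal
```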
|
|
|
Post by natenator on Dec 30, 2015 18:16:59 GMT -6
[Quoting 33coach's reply above.]

KPIs are measures that help us understand how we're doing against our objectives. The objective is to win games. Not YPC, not PPD, not points for or against (though those last two are probably better measures than all the others). You can have 15 YPC and still lose every game (because you couldn't stop a nosebleed). You could have 95% tackling efficiency but lose every game (because players were always out of position and tackles were made 20 yards down the field). You could have 50 points against and 75 points for and still lose most of your games.
|
|
|
Post by 33coach on Dec 30, 2015 18:22:27 GMT -6
[Quoting natenator's reply above.]

The opposite is also true: you can win every game and give up 15 YPC; you can win every game with 5% tackling efficiency. Those are the situations where metrics matter, because while you are going "HEY, THE SCOREBOARD SAYS WE ARE THE BEST TEAM EVER," the stats say you won because the teams you played handed you the game. Winning is the objective of the game, not the objective of a program. A program's objective is to build the best football players and citizens you can. Building a program requires proper KPIs and measuring against those KPIs. Some of the best programs I've ever seen were .500 in season.
|
|
|
Post by fantom on Dec 30, 2015 20:21:32 GMT -6
[Quoting 33coach's reply above.]

But does it change anything that you do? We used to keep a goals chart, but we stopped because we decided it was a waste of time. For example, one of our defensive goals was to hold opponents to under 3 yards per rush. But what if we didn't? It wasn't because we didn't try. We practiced run defense. We game-planned to stop the run. We called the defenses we thought were best to stop the run. We just couldn't stop it. So what good did that "metric" do?
|
|
|
Post by Chris Clement on Dec 30, 2015 21:30:56 GMT -6
It's important to know what you want to measure, how to measure it, and what that measurement means. If you don't understand the math, don't worry about it. Football is not yet sufficiently well modelled for these things to be critical, and it won't affect high schools for several years (though if you're an NFL team without a guy who understands how to do EPA and WPA calculations, you're literally costing yourself games).
Here's an example. I can show, objectively, that if we gain less than 7 yards on 1st-and-10, we are in a worse position on 2nd down insofar as converting a fresh set of downs; that is, our first-down play was a failure. I can also show that seven-yard number to be reliable across many different teams, different styles of offense, and different calibres of offense, so the argument "my team is different" doesn't hold water. Your team would have to be an incredible outlier for this not to apply.
So I can now look at the plays I use on 1st-and-10. I can determine the probability of each play getting me those 7 yards, and the average gain of each of those plays. If inside zone is never getting me 7 yards, it's hurting me every time I call it on 1st-and-10: stop doing that, you're hurting yourself. You may think you're "setting up something else," but you're not; this is a Nash equilibrium, a game-theory result built around exactly these kinds of real-life situations, so it's not just some theory.
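The 1st-and-10 self-scout described above can be sketched in a few lines. The play names and gain lists are invented, standing in for a real film breakdown.

```python
# For each 1st-and-10 call, what share of snaps reached the 7 yards that
# keep the series on schedule? Gains below are hypothetical film data.
SUCCESS_YARDS = 7

def success_rate(gains, threshold=SUCCESS_YARDS):
    return sum(gain >= threshold for gain in gains) / len(gains)

first_down_calls = {
    "inside_zone": [2, 3, 5, 1, 8, 4],
    "pa_flood": [12, 0, 9, 15, 7],
}
rates = {play: round(success_rate(gains), 2)
         for play, gains in first_down_calls.items()}
# {"inside_zone": 0.17, "pa_flood": 0.8} -- by the argument above, inside
# zone is an inefficient 1st-and-10 call for this hypothetical offense.
```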
I could go on for hours, but basically there are situations where people's natural biases, which are nearly impossible to consciously set aside, and people's misconceptions about how the game works, cause them to make bad decisions that can be identified and hopefully fixed.
There's another example using the same math as above: I can determine when to accept or decline certain penalties, like whether 2nd-and-10 is preferable to 1st-and-20, or 1st-and-5 to 2nd-and-2. These are usually penny-ante decisions, but I can put them in a really simple graph that anyone can read and squeeze out a tangible, if small, advantage.
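A minimal sketch of that accept/decline comparison, using a hypothetical conversion-probability table in place of the real numbers behind the graph:

```python
# P(offense eventually converts this series) by (down, distance).
# These probabilities are invented for illustration only.
CONVERT_PROB = {
    (1, 20): 0.28,
    (2, 10): 0.35,
    (1, 5): 0.78,
    (2, 2): 0.72,
}

def offense_prefers(situation_a, situation_b):
    """From the offense's side, pick the down-and-distance that converts
    more often; the defense should choose the penalty option that forces
    the other one."""
    return max(situation_a, situation_b, key=CONVERT_PROB.__getitem__)

offense_prefers((2, 10), (1, 20))  # (2, 10) under this table
offense_prefers((1, 5), (2, 2))    # (1, 5) under this table
```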
|
|
|
Post by fantom on Dec 30, 2015 21:55:31 GMT -6
[Quoting Chris Clement's seven-yard example above.]

That's in Canada. Would it be the same in the U.S.?
|
|
|
Post by Chris Clement on Dec 30, 2015 22:26:16 GMT -6
No, the numbers would obviously be different, but they do exist. Actually, the old 4-3-3 model is a reasonable approximation using NFL numbers.
|
|
|
Post by lochness on Dec 30, 2015 23:31:58 GMT -6
Quote from an earlier post:
"How have you applied a metric to measurably improve your team in a way that couldn't be observed through watching film? That is the question nobody has answered yet in four pages. I don't mean to insult anyone's methods, but it just seems like a waste of time designed to make everyone feel like a hard worker on the cutting edge, as opposed to something that produces actual meaningful results for a program."

Quote from the reply:
"The data I look at that has the biggest impact on how we do things the following year is the frequency with which we saw certain run and pass concepts the previous year. This has a huge impact on the structure of our spring practices, because I want to practice against the stuff people were actually calling against us. I'll give an example of something we actually changed based on this data collection. We are a Cover 3 defense, so we had always operated under the assumption that we should spend our spring and summer working heavily on defending four verticals and flood concepts, because those are some of the common 'Cover 3 beaters.' As it turns out, we rarely see flood anymore, for whatever reason, and that trend has held for several years, so we quit spending a bunch of time on it. We also almost never saw four verticals out of 2x2, so we don't spend nearly as much time on that as we do on 3x1 verticals. I don't know that I can prove this made us a better team, any more than we could prove that practicing on a Tuesday helps you get better for a Friday game, but I think we'd all agree that practicing against the plays you see is a good idea; that's why we watch scout film."

Yeah, absolutely, that's good stuff. But I'd definitely consider that good scouting and opponent analysis rather than number crunching. It seems like there's some confusion among all of us regarding the definition of "metrics." We scout the heck out of people and ourselves. We look at trends just like you describe here. All good stuff. That's not the activity I was questioning, though.
I question the guys who try to look at measures like "points per drive" or "play efficiency," because to me there's just no gain to be had there. Even play efficiency is a weak measure. The play may have sucked because we were bad at blocking it that particular year. Maybe teams were loaded up against it that season. There are just too many variables to draw a crisp conclusion for action. Even more dangerous is when you DO draw a conclusion that isn't really based in fact. Like I said in another thread, I think you can use these numbers in football to say anything you want, and then justify your decisions by claiming they were based on extensive analysis, when in fact, because of the nearly infinite number of variables affecting performance, the conclusion you are drawing may not be fact at all.
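The concept-frequency self-scout quoted above amounts to a tally. The breakdown list here is invented, standing in for a season's charted film.

```python
from collections import Counter

# Count the pass concepts charted from a season of film, then weight
# spring practice reps toward what the defense actually sees. Data is
# hypothetical.
concepts_seen = ["3x1 verts", "flood", "3x1 verts", "smash",
                 "3x1 verts", "smash", "3x1 verts"]
frequency = Counter(concepts_seen)
practice_order = [concept for concept, _ in frequency.most_common()]
# ["3x1 verts", "smash", "flood"] -- rep 3x1 verticals heavily, flood barely.
```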
|
|
|
Post by lochness on Dec 30, 2015 23:39:05 GMT -6
[Quoting the exchange above.]

That's exactly where we went. All those "goals" are meaningless. They become stuff we talk about on Monday, but they don't change anything that we do. It's not like we suddenly go out in week 3 and say, "Hey, don't forget we have a goal of holding this QB under 40% completions... so I guess we really need to play great pass defense this week," because we try to play great pass defense every week. We are not going to do anything materially different, so what is it other than a conversation piece?
|
|
|
Post by Chris Clement on Dec 31, 2015 7:57:07 GMT -6
That's where it becomes important to know the difference between predictive and descriptive statistics. Knowing that averaging 3ypc leads to losing is descriptive, and not actionable. Knowing that my kicker is 44% from this distance and hash but improves to 60% if we get on the opposite hash is predictive and actionable, and I get those numbers by tracking every kick in practice. Many games have been lost because coaches have assumed they were in "FG range," when really they were looking at a 40% chance or whatever.
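The kick-tracking idea above amounts to simple bucketed percentages. A minimal sketch, assuming a plain list of practice kicks; the log entries are invented, with "L"/"R" marking the hash:

```python
# Sketch of tracking practice kicks by distance and hash.
# All entries are invented: (distance in yards, hash, made?).
kicks = [
    (35, "L", True), (35, "L", False), (35, "L", False),
    (35, "R", True), (35, "R", True), (35, "R", True),
]

def make_pct(log, distance, hash_mark):
    """Fraction of kicks made from a given distance and hash (None if no attempts)."""
    attempts = [made for d, h, made in log if d == distance and h == hash_mark]
    return sum(attempts) / len(attempts) if attempts else None

left = make_pct(kicks, 35, "L")   # 1 of 3 in this invented log
right = make_pct(kicks, 35, "R")  # 3 of 3 in this invented log
```

With a season's worth of practice kicks in the log, the same function answers the "are we really in FG range from this hash?" question directly.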
You can think you know many things, but you also might be wrong. That seven yard figure was revolutionary. If you asked OCs around the country what they figured they needed to get on first down to maintain their chances of converting you'd get almost everyone saying 5-6 yds, but even losing one yard hurts you and two yards hurts quite a bit. You do these calculations subconsciously even if you don't realize it. Faced with 1-10 your mind searches for a play you expect to be successful given the circumstances. That's why you're not often seeing verts on 3-2. So if you're looking for a 5 yard play instead of a 7 yard play you will choose plays not necessarily poorly, but inefficiently.
I don't go into games with charts and crap, but I bring my penalty graph and my improved understanding.
Another big discovery: 3-4 converts at a much higher rate than 3-3. Coaches bin their call sheets so they have 3-1/3 and 3-4/6, which puts 3-3 at the upper limit of the short range. So they call plays really meant for two yards, underestimate how difficult it is to get three yards, and overestimate how often their base runs get three yards against a loaded box. On 3-4, coaches call plays designed to get 4-6 yards.
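Tallying conversion rate by exact distance-to-go, rather than by call-sheet bin, is a few lines of bookkeeping. A sketch with invented 3rd-down data:

```python
from collections import defaultdict

# Sketch of 3rd-down conversion rate by exact distance.
# Entries are invented: (yards to go, converted?).
third_downs = [
    (1, True), (2, True),
    (3, False), (3, False), (3, True),
    (4, True), (4, True),
    (5, False),
]

tally = defaultdict(lambda: [0, 0])  # distance -> [conversions, attempts]
for dist, converted in third_downs:
    tally[dist][1] += 1
    if converted:
        tally[dist][0] += 1

rates = {d: conv / att for d, (conv, att) in sorted(tally.items())}
```

Keeping the distances separate instead of binned is exactly what exposes a gap like the 3-3 vs 3-4 one described above.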
Finally, 2pt converts from the 5 are far more successful than 3-5 situations, despite being indistinguishable by any logic. Why? Because coaches carry a 2pt play in their pocket, and because they see the target as the end zone, so they'll use the whole end zone in their plan, but on 3-5 they just see a line that they have to get to, they never think of going beyond it. Many 2pt conversions are completed 10+ yds into the end zone, but very rarely is a 3-5 complete for a gain of 15.
I don't have fancy models, I only use complicated excel stuff because I can do it faster that way, but it's possible to do all of this by hand if you wanted. I just know as a proven scientific fact that people are wrong about stuff all the time for stupid reasons, they get stuck in their mindset, and I shouldn't take for granted anything which can be tested.
A lot of this can be tied into film as well. I took the above numbers and gave them context by watching many 2pt conversions and 3-5 situations, notepad in hand.
Last offseason our OC was talking about one of our top concepts of the past few years. I ran a quick report that anyone here could do in five minutes with Hudl and found that the play in question was actually kind of a turd. It had a lower ypa and a lower completion % than average, and it was the worst of our concepts with similar design and intent. This started a discussion and an investigation. It was a post-dig type play, and in reviewing it we found that when we had switched two guys' assignments it wrecked the timing. The dig didn't hit the window in time, so the QB had to give a little half hitch, by which point it was too late. We THOUGHT it was a great play because across two QBs and two receivers we'd often seen great plays from it, but it was a trick of the mind. The great plays were in fact great catches: the slot had to either dive for a ball thrown away from coverage or stand and take a nasty hit, so we associated those catches with the concept, even though it was usually just incomplete in the dirt. With the help of a stopwatch we found we were off by .2 seconds, and one of our PA actions fit perfectly. So we switched the backfield action that went with it and voila.
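The five-minute Hudl report described above boils down to grouping pass attempts by concept and averaging. A sketch; the concept names and results here are invented:

```python
# Sketch of a per-concept pass report: yards per attempt and completion %.
# Plays are invented: (concept, complete?, yards gained).
plays = [
    ("post-dig", False, 0), ("post-dig", False, 0), ("post-dig", True, 18),
    ("mesh", True, 7), ("mesh", True, 9), ("mesh", False, 0),
]

def concept_report(log):
    report = {}
    for concept in {c for c, _, _ in log}:
        rows = [(comp, yds) for c, comp, yds in log if c == concept]
        report[concept] = {
            "ypa": sum(y for _, y in rows) / len(rows),
            "comp_pct": sum(comp for comp, _ in rows) / len(rows),
        }
    return report

report = concept_report(plays)
```

Note how a single long completion can prop up a concept's ypa while the completion % tells the real story, which is the trick-of-the-mind effect from the post.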
|
|
|
Post by natenator on Dec 31, 2015 8:15:31 GMT -6
There's one small difference between you and the rest (most) of us... You coach at the University level, where there is a greater amount of consistency in players' ability and play than those of us coaching HS or youth-aged football.
|
|
|
Post by coachphillip on Dec 31, 2015 9:10:29 GMT -6
1. Chris, I think this goes back to us defining a metric, which the longer this conversation goes on, the less I think I know about lol. I love what you did with all the data though. Very effective application of information.
2. Natenator, don't do that. The "you coach x, we coach y" argument isn't really applicable here. It's a conversation about the usefulness of metrics and the effects those metrics have on your methodology of coaching football. That's universal.
|
|
|
Post by natenator on Dec 31, 2015 9:59:36 GMT -6
I get what you're saying, but I'm not sure I agree entirely. Sure, from a conceptual POV it's worthwhile, but at some point the differences between levels matter in the ability for metrics to become predictive. Take the FG example: I don't have enough practice time just to get in the fundamentals, let alone rep kicks enough for predictability. What I love is that Chris highlighted our own cognitive biases and heuristics that impede objective decision-making. I talk a lot about this topic when I give presentations on data-driven decision making. From that perspective I can see data analysis being a useful endeavor, but the keyword here is analysis (full spectrum).
|
|
|
Post by coachphillip on Dec 31, 2015 10:13:05 GMT -6
So what you're saying is that we deal with lesser/younger athletes, which leads to more inconsistency in behavior, training, etc., and therefore less consistent results, and that weakens the case for metrics being as useful at lower levels of play. Is that right?
|
|
|
Post by runitupthemiddle on Dec 31, 2015 10:37:59 GMT -6
I think you guys are taking the use of metrics to the extreme... I'm not trying to quantify the game or find the winning formula. I simply want to know, matter-of-factly, how often each player on my team screws up his assignment, and what effect that screw-up has:

If I have 1 DB who screws up an average of 1 out of every 3 snaps, I need to get some individual workouts with him.
If all my DBs screw up an average of 1 out of every 2 snaps, I need to reevaluate how the scheme is being taught.
If all my DBs screw up an average of 2 out of every 3 snaps, I need to reevaluate the scheme.

BUT I can't do that if I don't have the REAL data in front of me. I can guess... or I can know. Both take the same amount of effort. It's not rocket surgery... it's just basic analysis.

Can't you figure that out by just watching film? I don't really consider tracking how often someone messes up their assignment as metrics, but that's just me. I have two eyes to see if people are screwing up; not sure how numbers change things besides tallying how many assignments the kids miss.

Sent from my SAMSUNG-SM-G900A using proboards

They could be messing up because they stay up all night on Twitter, Facebook, and PS4, don't eat correctly, and then are mentally and physically tired. Not to mention they're chasing tail around the school, some successful, some not. Oh, and did mom and dad fight today? Well, that's going to affect them too. Lots of things besides stepping with the wrong foot or an eye peeking in the backfield could cause them to screw up. That stuff isn't showing up in a Hudl report.
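The error-rate thresholds in the post above amount to a simple decision rule. A sketch; the cutoffs (1/3, 1/2, 2/3 of snaps) mirror the post, but the function name and output wording are illustrative:

```python
# Sketch of the missed-assignment decision rule described above.
# Thresholds come from the post; the structure is a hypothetical rendering.
def unit_diagnosis(error_rates):
    """error_rates: per-player missed-assignment rates (errors / snaps)."""
    avg = sum(error_rates) / len(error_rates)
    if avg >= 2 / 3:
        return "reevaluate the scheme"
    if avg >= 1 / 2:
        return "reevaluate how the scheme is taught"
    flagged = [i for i, r in enumerate(error_rates) if r >= 1 / 3]
    if flagged:
        return "individual workouts for flagged players"
    return "no action"
```

The data input is still film: someone has to grade each snap. The rule just turns the tallies into a consistent next step instead of a gut call.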
|
|
|
Post by jrk5150 on Dec 31, 2015 11:02:06 GMT -6
Yeah, but don't take that too far either. Pros can have fights with their wives, college kids can have exams they just pulled an all-nighter for, and so forth. I think there IS a difference from one level to the next, and some of it is in consistency, sure. But I think it lies more in the availability of reps and the availability of data from other games/teams to provide a large enough sample to be reliable/valid. I think there are still likely to be useful metrics, and Chris's examples show there are useful statistics that exist at sub-NFL/BCS levels. I would imagine his stat around first down yardage is something most HS coaches can do for their own teams, both in-season and over multiple seasons, especially if running a similar offensive system over the years. If you use enough data over time, the differences between individual teams start to level out and the data becomes relevant. I would guess, however, that if you went from DTDW 2 years ago to spread option you might have a problem going back for data...
Going back to the baseball comment earlier, where the HS coach (just blanked on the name, I'm sorry) talked about day/night and RH/LH splits being useless for him - that is correct. But pitch count for an individual pitcher may not be. You might see after 4 starts that Joe Smith falls apart after 100 pitches. Sure, you visibly see him getting tired and having trouble in games, but you may not realize the magic number is 100 pitches, since he may hit that number in different innings over different games, and may not really show signs of being tired until he's at 115 pitches and has already given up 3 runs. Could that lead you to make a decision that perhaps saves you a win? I would say that's certainly a possibility. Again - if you can find metrics that matter to you, and you can apply them in the right context, then they can be useful.
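The pitch-count example amounts to splitting outcomes at pitch 100 and comparing the two windows. A sketch with invented pitch-by-pitch data:

```python
# Sketch of a pitch-count split: runs allowed per pitch before and after
# pitch 100. The outcomes below are invented for illustration.
outings = [
    # (pitch number within the game, runs allowed on that pitch)
    *[(n, 0) for n in range(1, 101)],
    (102, 1), (107, 0), (113, 2),
]

def runs_per_pitch(log, lo, hi):
    window = [runs for n, runs in log if lo <= n <= hi]
    return sum(window) / len(window) if window else 0.0

early = runs_per_pitch(outings, 1, 100)   # pitches 1-100
late = runs_per_pitch(outings, 101, 130)  # pitches beyond 100
```

With several starts in the log, a clear early/late gap is the "magic number" signal that the eye test can miss.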
|
|
|
Post by Chris Clement on Dec 31, 2015 13:20:10 GMT -6
Obviously I have the benefit of a more consistent league, and we have 250 games in our system, but even six weeks into the season we can evaluate plays that just aren't working, and even if it's a bread and butter play, we can dump it for the balance of the year and maybe revisit it in the offseason. You can use it in scouting as well. Maybe they have a play that's a nightmare to defend on paper, some kind of triple RPO, but if you actually include ypa in your scout report, you might see that it's not an effective play, that they're actually turrible at it. So while you still need to defend it and practice against it, it doesn't need to be the focus of your week.
|
|
|
Post by coachwoodall on Dec 31, 2015 17:11:17 GMT -6
So all the guys on the nay side of this discussion are going to save time and energy by NOT doing stats in the coming season, right?
|
|
|
Post by coachd5085 on Dec 31, 2015 18:14:24 GMT -6
That is the thing. I don't think there is a "nay" side to this discussion. I think there are those who don't see value in certain metrics, and those who do. I guess my grumpy old man thread spurred from a few threads - one asking about every 10 yards of offense being a point, another looking for good "metrics" for success in the postseason, etc. Basically, I don't see the need for "metrics" that just describe your results in other ways.
|
|