From a general viewpoint – raising awareness among the ever-growing soccer public of the United States – yes… Expected Passes and Expected Goals add value.
They’ve made their way into the #soccer #stats genre, and most recently Expected Goals has appeared in many national TV productions of United States soccer.
I suppose this is a good thing, as they offer some interesting graphics (eye candy) to help new followers begin to learn the game. But for those of us who understand the game they’re more #fakenews than anything else; kind of like the Audi Player Index.
A bit of flash/clickbait that offers something but really nothing; more harshly put: #fakenews.
So why am I publicly lambasting some pretty exceptional statistical modeling by some very smart guys? Here’s why:
From a personal standpoint: over four years ago, when I first started developing Possession with Purpose analysis, I sat down with Caleb Porter for nearly an hour to discuss the value of statistics.
He imparted to me there’s plenty of information out there – the goal is to filter through the gloss and come up with analytical tools coaches can use (not only in the off-season, but during the season) that will lend value to what gets trained week-in and week-out as you prepare for each game.
Listening to what Caleb Porter offered was good enough for me, but if you need other (statistical/technical) reasons to understand why Expected Passes and Expected Goals are flawed – not only from a coaching perspective but from a general knowledge perspective – read on.
In my first article on passing statistics (May 2014) I provided clear evidence that global soccer statistic web-sites, like Squawka.com, Whoscored.com and MLSSoccer.com all identify AND publish different passing totals for the same games.
In my example (for the same game) the MLS chalkboard showed one team had completed 434 passes, while MLS Statistics indicated 369 completed passes, versus Squawka indicating 356 completed passes, and Whoscored indicated 412 completed passes.
That inconsistency appeared time and again across these web-sites, even though they all use the F24 data feed developed/provided by OPTA.
The reason for the numerical differences is how those web-sites define ‘passes’.
Do they include headers, through-balls, throw-ins, and crosses as passes (F24 tracks each of those separately)? Some web-sites include some of those and others don’t.
For me, ALL of those actions are passes, and the reason why is they are used to ‘define’ movement of the ball from one location to another location without dribbling.
When quantifying ‘expected passes’, are ALL types of passes used, and if so, are they weighted equally? I’d offer it’s far easier to complete a throw-in from anywhere on the pitch than it is to complete a cross.
And if different web-sites are used for different leagues, are the same equations applied, and are exactly the same types of passes counted on one web-site as on the other?
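The counting discrepancy above is easy to reproduce with a toy example. The sketch below is entirely hypothetical – the event types and counts are invented, not real F24 data – but it shows how three different definitions of “pass” turn one and the same event log into three different completed-pass totals:

```python
# Hypothetical sketch: one event log, counted under three different
# definitions of "pass". Sites consuming the same Opta F24 feed make
# exactly this kind of inclusion choice. All numbers here are invented.
from collections import Counter

events = (["open_play_pass"] * 330 + ["header"] * 26 + ["throw_in"] * 30
          + ["cross"] * 22 + ["through_ball"] * 4)

definitions = {
    "open play only":       {"open_play_pass"},
    "plus crosses/headers": {"open_play_pass", "cross", "header"},
    "everything":           {"open_play_pass", "cross", "header",
                             "throw_in", "through_ball"},
}

counts = Counter(events)
totals = {name: sum(counts[kind] for kind in kinds)
          for name, kinds in definitions.items()}

for name, total in totals.items():
    # Same game, same data feed, three different "completed passes" totals.
    print(f"{name}: {total} completed passes")
```

Three sites could each be “correct” by their own definition and still publish three different numbers for the same game.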
What about passes that are made simply for the sake of opening up the defensive unit?
As a soccer coach I instruct/direct players to make passes, knowing they will be unsuccessful, in order to stretch the back four or relieve up-pitch pressure.
ALL of those passes are made “knowing” ahead of time they are unlikely to be completed.
When quantifying ‘expected passes’ (a statistic built on successful passes), the calculation penalizes players for unsuccessful passes even though there was specific intent behind those passes.
Soccer is not a game where one team plays on the pitch “without” being impacted by the opponent – it’s two teams trying to gain possession, keep possession, move the ball, and score a goal…
Meaning passes attempted are a function of what the opponent gives as much as what you try to take as a team.
- If the opponent plays a low block, passes outside the attacking final third are inherently easier to complete than those within the attacking final third.
- If an opponent plays high pressure, passes outside the attacking final third are inherently harder to complete than when playing a team who bunkers.
- You get my drift, yes?
When quantifying ‘expected passes’ the calculation ignores the defensive ‘team’ alignment of the opponent.
What about taking into account the location of the opponent in relation to where the pass is offered?
Soccer statistics don’t qualify whether or not the player completing the pass was being hindered (closely marked by a defender) versus in open field.
When quantifying ‘expected passes’ the calculation ignores the position of the opponent relative to the player making the pass.
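To make that concrete, here is a deliberately simplified caricature of an expected-pass model, built the way event-data-only models typically are: completion probability as a function of where the ball starts and ends, nothing more. The formula and coefficients are invented for illustration; the point is what is absent from the inputs.

```python
import math

def expected_pass(x_start, y_start, x_end, y_end):
    """Toy expected-pass model (invented coefficients).
    Inputs are ball locations only -- note what is NOT here:
    defender proximity, opponent shape, weather, pitch condition."""
    distance = math.hypot(x_end - x_start, y_end - y_start)
    # Longer passes and passes ending deeper up-pitch are rated harder.
    z = 3.0 - 0.03 * distance - 0.02 * x_end
    return 1 / (1 + math.exp(-z))  # logistic squash to a probability

# Two identical passes: one into heavy traffic, one into open space.
# The model returns the exact same probability for both, because the
# defenders are invisible to it.
marked = expected_pass(40, 30, 70, 34)    # target tightly marked
unmarked = expected_pass(40, 30, 70, 34)  # target completely free
print(marked, unmarked)
```

Any model restricted to ball coordinates, however cleverly fitted, is blind to the very thing that makes a pass hard or easy.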
What about taking into account the field conditions?
Recall the game against Costa Rica a few years ago in Colorado – the pitch was covered in snow.
How about a game played on field turf, or a narrow pitch, or an extremely wide pitch.
What about a game played with high winds, or a water-logged pitch, or a game played in excessive heat?
When quantifying ‘expected passes’ the calculation ignores the pitch conditions and how those conditions impact movement of the ball.
Statistics have value when measured in a completely controlled environment.
While there are parts of the game that are controlled, the majority of a soccer game, played within the Laws of the Game, is uncontrolled. It’s non-stop, in-your-face action, split into two halves of 45+ minutes.
I’ve heard many head coaches offer thoughts like these after a game (and I’ve offered them myself):
- “We controlled the game, there were times where the opponent had a bit of control, but at the end of the day we got our three points because we controlled more of the game than they did.”
- “Although we didn’t control the entire game we came away with a draw, and when playing a team of that caliber, or an away game in this atmosphere, a draw was almost as good as a win; I’m happy with the result.”
When quantifying ‘expected passes’ the calculation ignores whether a team controlled or failed to control a game.
Turning to ‘expected goals’ – shots are no more uniform than passes:
- Some shots ARE taken simply for the sake of taking a shot, to ‘test the keeper’.
- Some shots are taken ‘early on’ simply to ‘show’ that a player isn’t afraid to take a shot from outside the box when given the time and space to do so.
- Some shots are taken in ‘heavy traffic’.
- Some shots, taken from the exact same location as those previously taken in ‘heavy traffic’, are taken with no defenders near by.
- Some shots, taken from the exact same location, are taken with the left foot of a right-footed player, or vice versa, or with the instep, or laces, or the outside of the boot, or head, or chest, or knee, or hand…
- Some shots, taken from the exact same location, are taken on a water-logged pitch or some other weather condition that impacts ball movement.
- Some shots are taken late on that have absolutely no value relative to the score-line at the time. In other words, a goal scored when down 4-nil, or up 4-nil, with five minutes left really doesn’t matter.
Last, and certainly not least, but perhaps THE most important point – not all teams show the strongest correlation (r) between goals scored and points earned; for some teams it is shots taken, or shots on goal, or my Total Soccer Index that correlates best with points earned.
So, quantifying ‘expected goals’ completely fails to recognize that not all teams behave/perform the same way on the pitch – therefore one ‘event-based’ statistic simply CANNOT be relied upon to predict every team’s future results.
Finally, the only shot taken that is ALMOST exactly the same (truly repeatable), with respect to player positioning, is a Penalty Kick.
- And even those can be deceptive given the pressure a player feels if the PK comes during the World Cup versus a domestic game that has little value to points in the league table.
Every weakness offered about ‘expected passes’ applies to ‘expected goals’ with one exception – all soccer statistic web-sites accurately count a goal scored the same way.
When quantifying any ‘expected’ statistics, if those statistics don’t account for all the conditions offered above, they are dangerously flawed.
Bottom line at the bottom.
Expected statistics don’t tell me:
- Anything I really need to know as a head coach in order to make my team better on the pitch, and
- Anything I probably don’t already know about the players on my team and whether or not they are good at passing or shooting.
In the old days these would probably be classified as ‘red herrings’…
In modern day terminology I’d offer they are #FakeNews.
It’s shameful that national TV stations use statistics like these when they really aren’t anything more than background noise.
Good advice is often ignored – that doesn’t mean it shouldn’t be offered.