Tagged: Possession with Purpose

Getting Better as a Youth Soccer Coach

When I was a youth soccer Head Coach, in both England and America, I sometimes struggled with how to channel the well-intentioned, high level of energy that parents and guardians brought to the pitch.

At that time I hadn’t yet conceived my Possession with Purpose (PWP) analytical approach, but if I had, I would certainly have followed it.

Why?  Because I think there is great value in understanding some of the basic activities of soccer, measuring those activities, and using the results to drive improvement.  And the earlier in a player’s development the better, because while this game is measured in wins, draws, and losses, it isn’t just about scoring goals – it’s about preventing them too.

If you’re an aspiring Head Coach, new or experienced, I think this approach of leveraging parents and guardians to help you help the team is a great step towards getting better.

If that resonates with you – or even if it doesn’t – I think it’s worth taking a few minutes to consider what I offer.

Before digging in, you should know up front that this entire approach works from my strategic Possession with Purpose Family of Indices; the same analysis offered up at the 2014 World Conference on Science and Soccer.

And the same analysis used to evaluate professional team performances within Major League Soccer, the English Premier League, the Bundesliga, La Liga, World Cup 2014 and the UEFA Champions League.

The end state is to measure team performance – ignoring results (points in the league table) – in order to track and trend individual and team performance with the intent of driving improvement.

In statistical terms, the relationship (correlation) of my analyses – the Composite PWP Index to points in the league table, without counting points in the Index itself – is an R-squared of .86.

In other words, my Index explains roughly 86% of the variation in the league table without ever counting points.

AND… in 86% of games the winning team executes the steps within PWP better than the losing team!

With that said here’s what to do.

  1. Split the pitch into thirds and place one parent at the entry point into your own defending final third and one at the entry point into your opponent’s defending final third.
  2. Next, place two parents at the middle of the pitch.
  3. Then place one parent at or near the end line on your defending side of the pitch and then one parent at the same position on the opponent’s defending side of the pitch.
  4. Give each parent a clipboard and pen (waterproof if necessary) and have them begin to count and keep track of certain ‘team’ data points.
  5. The two parents in the center of the pitch are to count and document all passes attempted and passes completed for each team (throw-ins and free kicks included) across the entire pitch.  If you have four parents, have two track passes attempted and two track passes completed, one for each team.
  6. The two parents at the entry to the defending final third are to count and document passes attempted and completed, within and into the defending final third, for each team.  This also includes all throw-ins, crosses, corners and free kicks that are not shots taken on goal.  If you have four parents/guardians, then have one each track passes attempted and passes completed separately for each team.
  7. Finally, the two parents on the end lines are to count and document shots taken, shots on goal, and goals scored for each team.

At the end of the game you will have a complete dataset (by volume and percentage) that gives you the information to identify your team’s possession percentage, passing accuracy, penetration per possession, ability to generate shots per penetrating possession, what percentage of shots taken were on goal, and what percentage of shots on goal became goals (your team attacking).
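As a rough sketch, the clipboard tallies described above can be turned into those derived percentages in a few lines of Python.  All field names here are illustrative, not part of any official PWP specification:

```python
# Sketch: turning the parents' raw match tallies into the derived
# team-attacking percentages described above. Field names are hypothetical.

def derived_stats(team, opp):
    """team/opp: dicts of raw counts collected from the clipboards."""
    return {
        # share of all passes attempted is a common proxy for possession
        "possession_pct": team["passes_att"] / (team["passes_att"] + opp["passes_att"]),
        "passing_accuracy": team["passes_comp"] / team["passes_att"],
        # how often completed passes penetrate the opponent's final third
        "penetration": team["ft_passes_comp"] / team["passes_comp"],
        "shots_per_penetration": team["shots"] / team["ft_passes_comp"],
        "shots_on_goal_pct": team["shots_on_goal"] / team["shots"],
        "goals_per_sog": team["goals"] / team["shots_on_goal"] if team["shots_on_goal"] else 0.0,
    }

# Example tallies from one match (made-up numbers)
us = {"passes_att": 300, "passes_comp": 240, "ft_passes_comp": 60,
      "shots": 12, "shots_on_goal": 5, "goals": 2}
them = {"passes_att": 200, "passes_comp": 140, "ft_passes_comp": 30,
        "shots": 8, "shots_on_goal": 3, "goals": 1}

attack = derived_stats(us, them)    # your team attacking
defend = derived_stats(them, us)    # your team defending
```

Running the same function with the arguments swapped gives the defending view described in the next paragraph, since your opponent’s attacking numbers against you are exactly your defending numbers.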

And since you collected data on your opponent you will also have all the data on how well your opponent did in those same categories against you (your team defending).

Pretty much meaning you’ve just captured the ENTIRE bell curve of activities I use to measure team performance at the very highest level in the World.

With that data you can now determine, analyze, and document/chart/track ways to improve your attacking as well as defending team performances.  And as each game occurs you continue to build that dataset.

This information is then used to help you develop new training plans that look to help the team improve where weaknesses exist.

I do not recommend keeping track of individual performance unless you have enough parents and players who are mature enough to deal with individual weaknesses.

This approach should have application at any level of soccer – premier, select, recreational, ODP or elsewhere.  As a matter of opinion, I’d offer that the closer you are to a higher level of play, the more important this approach becomes.

Outcomes from this approach give you data to set targets for improvement and the ability to measure success against those targets.

In addition, this approach also reinforces that Youth Soccer Development is not all about winning, it’s about getting better while trying to do the things teams need to do in order to win.

If any team wishes to take on this challenge, as a youth club, anywhere in America, send me your data and I will give you one month of analysis that includes preparing products I develop in my analysis of professional football clubs.

I may even publish those products, as examples, for others to learn from in future articles.

And if you are located in the Portland or Beaverton area, send me a note and I will make every effort to visit a training session and/or game to help better explain this approach.

Finally, my general analysis may also include some recommendations on what training plans/programs may help focus your team on key areas to improve.

Bottom line at the bottom:

There is value in understanding and tracking the basic activities that occur in a game of soccer.  It not only helps the players understand their larger role in this team game, it also helps the parents understand the greater detail and responsibility you have as a coach to help others get better as a ‘team’.

In case you missed it: this year four Head Coaches from teams who finished at or near the bottom of the CPWP Strategic Index have already been sacked in MLS:

CPWP Strategic Index Week 31 MLS

And last year five of the six teams performing worst in the PWP steps had their Head Coaches sacked!

End of Season 2013 MLS Coaching Changes

Pretty compelling evidence that teams who perform better have Head Coaches who last longer… if you want success as a Youth Head Coach, I strongly suggest you adopt the measurement methods and analysis associated with PWP – with or without using parents/guardians.

If there are ever any questions please feel free to contact me through LinkedIn or Twitter; my Twitter handle is @chrisgluckpwp.

Best, Chris

COPYRIGHT, All Rights Reserved.  PWP – Trademark.


Redefining and Modernizing Total Shots Ratio

For many years Total Shots Ratio has plodded along as a good indicator of team shooting performance, not overall team performance, but shooting performance.

It’s a good enough indicator that it’s found its way into generic match reports for professional soccer teams and has good visibility on Opta – a well-recognized soccer statistics company now owned by Perform Group.

But all that publicity and ‘usability’ doesn’t make it ‘right’!

Why do I say that?

Within a game of football there are always two teams playing against each other – so team performance statistics should not only take into account what the attacking team is doing – they should also take into account what the opponent is doing to the attacking team.

So what do I mean about modernizing TSR?  Most define TSR as simply the volume of shots one team takes versus the volume of shots the other team takes.  That’s okay, but the end state – the result, a goal scored – is excluded.

So my new vision of TSR centers on the end state as well as the volume – in other words, the equation for Attacking TSR (ATSR) becomes Goals Scored/Shots Taken, and Defending TSR (DTSR) becomes your opponent’s Goals Scored/Shots Taken against you.

Finally, in light of how well Composite Possession with Purpose correlates to points earned in the league table, I would also create a Composite TSR (CTSR).
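A minimal sketch of these three ratios follows.  Note the composite formula is my assumption – the attacking ratio minus the defending ratio, mirroring how CPWP balances attack against defense – since the exact composite equation isn’t spelled out here:

```python
def atsr(goals_for, shots_for):
    """Attacking TSR: goals scored per shot taken."""
    return goals_for / shots_for

def dtsr(goals_against, shots_against):
    """Defending TSR: opponent's goals scored per opponent's shot taken."""
    return goals_against / shots_against

def ctsr(goals_for, shots_for, goals_against, shots_against):
    """Composite TSR (assumed form: attacking ratio minus defending ratio)."""
    return atsr(goals_for, shots_for) - dtsr(goals_against, shots_against)

# Example: a team scores 2 from 10 shots and concedes 1 from 8 shots faced.
composite = ctsr(2, 10, 1, 8)   # roughly 0.2 - 0.125 = 0.075
```

A positive composite means the team converts its own shots at a better rate than it allows its opponents to convert theirs.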

Before getting to the numbers – some history first:

I built Possession with Purpose using this philosophy and if you’ve been following my efforts for the last two years you know that my correlations to points earned in the league table are extremely high…  To date:

  • MLS 2014 = .86
  • Bundesliga = .92
  • English Premier League = .92
  • La Liga = .91
  • UEFA Champions League = .87

So let’s peel back the regular way TSR correlates to Points earned in last year’s MLS – when viewing the old way (Total Shots only as a percentage for both teams) the Correlation Coefficient “r” for the entire league was .32.

My new way of calculating CTSR, with the end state of goals scored included, has a correlation coefficient “r” of .75.

Far higher…  now for some data.

Here’s the correlation my new TSR Family of Indices shows with respect to points earned in the league table – the same analyses used with respect to CPWP above:

  • MLS 2014 ATSR (.74) DTSR (-.54) CTSR (.75)
  • Bundesliga ATSR (.53) DTSR (-.41) CTSR (.68)
  • EPL ATSR (.86) DTSR (-.35) CTSR (.76)
  • La Liga ATSR (.88) DTSR (-.77) CTSR (.92)
  • UEFA ATSR (.64) DTSR (-.40) CTSR (.65)
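For anyone wanting to reproduce this kind of check, the Pearson correlation coefficient “r” between an index and points earned needs nothing beyond the standard library.  The end-of-season numbers below are made up for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical end-of-season values: one index score and one points
# total per team, for a six-team league.
index_scores = [1.8, 1.2, 0.9, 0.4, -0.3, -1.1]
points_earned = [68, 60, 55, 47, 40, 31]
r = pearson_r(index_scores, points_earned)
```

An r near 1 means the index orders teams almost exactly as the league table does; an r near 0 means it tells you little about points earned.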

Like CPWP, the correlations vary – in four of the five competitions CTSR has a better correlation to points earned in the league table, while in one case (the EPL) ATSR has the best correlation.

So how do the numbers stack up for some individual teams when evaluating ATSR, DTSR, CTSR, and CPWP compared to those teams points earned throughout the season?

In other words what do the correlations look like (game to game) through the course of a season for sample teams within each of those Leagues?


In almost every sample, TSR (now ATSR) has a lower overall correlation to a team’s points earned in the league table than CTSR (Borussia Dortmund and Barcelona being the exceptions).  This follows the same pattern seen with CPWP almost always having a higher correlation than APWP, and Goal Differential almost always having a higher correlation than Goals Scored.

I’ve also taken the liberty of highlighting which Composite Index has the best correlation to points earned between all four categories – in every instance either CTSR or CPWP is higher than TSR.  But, as can be seen, sometimes CTSR is higher than CPWP…

What this shows is that there simply isn’t one Index that is far better or far worse than another – different teams show different styles that yield better relationships to points earned in different ways, meaning there is not only room for improvement in the current TSR statistic but room for the inclusion of PWP principles within the industry standard.

I would offer – however – that even when you create CTSR the backbone of that data can’t offer up supporting analyses on how a team attacks or defends.  It’s still only relevant to the volume of shots taken and goals scored.

And while the volume of shots on goal and goals scored appears to be fairly constant across most competitive leagues (averages greater than 5 and 2, respectively, for teams winning on a regular basis), the average of shots taken for winning teams is not as constant… (Expected Wins 4) – which is why I favor PWP over TSR – nothing personal, just my view…

In Closing:

I’m not sure I did a good job of comparing what I view as the old way to calculate TSR (the way that ignores the End State of Scoring a goal) and how an update to it can help tell a better story that actually correlates better to the complexities of soccer.

Best, Chris

COPYRIGHT, All Rights Reserved.  PWP – Trademark

Expected Wins Five – Europe

In my previous series on Expected Wins Four – probably more appropriately entitled “Expected Points” – I’d taken a look at how the general tendencies of four primary competitions in Europe (England, Germany, Spain, and the UEFA Champions League) compare to Major League Soccer – Is European Football Really Higher Quality than Major League Soccer?

This time I’m focusing strictly on Europe and offering up how things stand in PWP with the season coming to a close soon.  But before digging in, some things to share about PWP to date:

A reminder – PWP is about two things:

  1. The End State in that the final Index comes as close as possible to the League Table without using points earned in any of the calculations, and
  2. Recognizing that soccer is played in a free-flowing environment – picture two amoebae fighting each other in a confined space…  There is attempted control by the Head Coach, including tons of preparation to set the stage for ‘an approach’ to earn three points – and then there is the game itself, with but one time out (halftime) and no namby-pamby huddles or official stoppages of play between possessions.  Meaning these guys play a full-on, in-your-face (sometimes literally), non-stop game of constant thinking and reacting, where the ball can literally go in any direction at any time… not purely random, but close.

Given that, PWP attempts to tone down all that volatility and parse out general tendencies that fall within the bell curve of activities – it’s not perfect – but it’s bloody good… and yes – I have made a few mistakes along the way (if you don’t work you don’t make mistakes).  The latest has been a technical mistake – the relationship of CPWP to the League Table is not an R Squared number (Coefficient of Determination) it is an R number (Correlation Coefficient).
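The correction described above is purely one of labeling – the two statistics are directly related, as this tiny example (with a hypothetical correlation value) shows:

```python
# The coefficient of determination (R squared) is simply the square of
# the correlation coefficient (R). A hypothetical index-to-table R of
# 0.93 therefore corresponds to an R squared of about 0.86.
r = 0.93
r_squared = r ** 2
print(round(r_squared, 4))  # 0.8649
```

So relabeling a value from R² to R doesn’t change the underlying relationship, only how strong the stated number should be read as being.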

For the stats followers that may be an issue… but even with the modernized TSR (read here) the CTSR “R” is still generally lower (team to team) and certainly lower (table to table) than CPWP – meaning there remains room for both statistical analytical approaches in a game that is played across the world…

Also, my thanks for some great research by Rob Lowe, a mate with the same passion for footy, who has asked to collaborate with me in the future.  He has done some additional regression analysis on the data points of PWP with respect to goals scored and points earned.  I should point out that his results show that not all six of the data points in the PWP equation independently (directly) relate to goals scored or points earned.  For me that is okay – and actually great news, for a few reasons…

  1. Both of my two new statistics (Passes Completed in the Final Third per Passes Completed across the Entire Pitch – Step 3 of PWP) and (Shots Taken per Completed Pass within and into the Final Third – Step 5 of PWP) did statistically relate to Goals Scored and Points Earned (independently).  Meaning those new statistics are relevant – both within the context of PWP and outside the context of PWP.  It’s this statistical regression type information that should solidify these two new statistics in the world of soccer.
  2. For both Possession (Step 6 of PWP) and Passing Accuracy (Step 5 of PWP) – as you will see a bit later – those two derived data points were never supposed to directly (independently) relate to goals scored or points earned; indeed, I have advocated for quite some time that they shouldn’t.  PWP was built with the intention that the six derived data points only needed to relate to each other in a stair-step relationship, recognizing that in every game a team needs to possess the ball, move the ball, penetrate the opponent’s final third, take shots based upon that penetration, put them on goal, and score goals – all while preventing the opponent from doing the same thing.
  3. Another view on the outcome that Rob has noted – it’s unreasonable to analyze a game of soccer without taking those activities into account.  Rob’s positive feedback was that both possession and passing accuracy act as a “smoothing agent” within the Index – I agree, though as I begin to learn the nuance of writing an Academic Paper I would put it this way:
  4. Possession and Passing Accuracy stats have limitations when viewing overall regression analysis relative to goals scored and points earned – but those limitations actually give the soccer analyst a much better understanding of the context of activities that occur when one team is performing better than another.
  5. In addition, Passing Accuracy statistics provide a coach a great measurement tool for how well some players may develop and progress into higher levels of competition – to exclude data of this import really ignores some of the most fundamental training aspects a team needs to do in order to improve.
  6. Also, there is excessive volatility in the percentages associated with Shots on Goal versus Shots Taken and Goals Scored versus Shots on Goal – if I only look at those two things then evaluating a game is all about (pass-fail) – granted winning and losing is pass-fail.  But to develop a “winning culture” a grading system perhaps more appropriate is A-B-C-D-F – in other words there are levels of success above and beyond pass-fail – especially when you are a team that isn’t at the very top of the league.
  7. By having Possession and Passing Accuracy in the equation you get a much larger (explanatory) picture on the culture of success – and as things appear to take shape, the Index itself, gives better clarity to that level of success for teams that are mid-table as opposed to bottom dwellers or top performers…

Now for the grist in Europe – first up – England: 

Note that the first two diagrams (in each four diagram grouping) highlight where the highest quantity and highest quality occurs within each competition – after some growing pains (earlier Expected Wins measurements) all four competitions now see the teams that win having the highest averages, in all categories, for both quantity and quality… proving (for the most part) that more is better and more results in more…

[Charts: Barclays Premier League PWP Data Points; Barclays Premier League PWP Derived Data Points; English Premier League CPWP Index; English Premier League CPWP Predictability Index]

All told the correlation, at this time, remains very strong – note that the “R” has replaced the “R2” in my third and fourth diagrams.

If I remove Possession and Passing Accuracy from the CPWP Index, the R value drops to .78 – statistically reinforcing that the Index better represents the standings in the league table when Possession and Passing Accuracy data are included.  Proving, yet another way, that goals scored and shots taken simply do not provide adequate depth on what activities occur on a pitch relative to earning points in the league table!  And if you’ve read Modernizing TSR, this doesn’t mean ATSR/DTSR or CTSR don’t have value – they do…

As things stand today Chelsea take the League and since Man City, Man United, and Arsenal round out the top four (different orders) in both CPWP and CPWP-PI I’d offer it’s those four that advance to the UEFA Champions League next year.  The bridesmaid looks to be a two horse race (Spurs supporters may argue that) between Liverpool and Southampton.

Note that Southampton edges Liverpool in CPWP but that Liverpool edges Southampton in CPWP-PI – meaning when excluding Goals Scored – Liverpool has better quality than Southampton – so for Liverpool it’s more about converting Shots on Goal to Goals Scored – while for Southampton it’s more about getting clean sheets and scoring at least one goal; at least in my view – others may see that differently?

In retracing the earlier discussion on the data within the six steps of PWP – as you can see in both the first and second diagrams (for all competitions), the Exponential Curve (Diagram 1) as well as the Power Curve (Diagram 2) – the stair-step relationships between the data, point to point, are incredibly high…  Even more intriguing is how close those “R2” numbers are for winning, drawing, and losing… really driving home, in my view, just how small the margin of error is between winning, drawing, and losing.

For goals scored (for or against) we really are talking about 5 or 6 standard deviations to the right of the bell curve…


[Charts: Bundesliga PWP Data Points; Bundesliga PWP Derived Data Points; German Premier League CPWP Index; German Premier League CPWP Predictability Index]

Perhaps the most intriguing issue this year isn’t the FC Bayern story – it’s the lack of goal scoring from Borussia Dortmund.  When viewing the CPWP Predictability Index, Dortmund is clearly offering up all the necessary culture the team needs in order to succeed – with one exception – goal scoring… wow!

Another surprise may be Wolfsburg – I’d pick them and Bayer Leverkusen to finish two and three in the league table; both show pedigree in team performance with and without considering goals scored…


[Charts: La Liga Premier League PWP Data Points; La Liga Premier League PWP Derived Data Points; Spanish Premier League CPWP Index; Spanish Premier League CPWP Predictability Index]

Barcelona and Real Madrid are locked in for the top team battle – my edge goes to Barcelona.  I’d offer more here but I’m simply not up on the La Liga as much as I’d like to be…

UEFA Champions League:

[Charts: UEFA Champions League PWP Data Points; UEFA Champions League PWP Derived Data Points; UEFA Champions League CPWP Index; UEFA Champions League CPWP Predictability Index]

The top eight teams that advanced are identified above – given the general success of CPWP relative to the top eight, I’d expect FC Bayern Munich, Barcelona, Real Madrid, and Juventus to advance to the semi-finals.

In Closing:

My first of at least 4-5 Academic Papers is soon to be published – my thanks to Terry Favero for helping me work through this new experience – his support, patience, and knowledge in navigating all the nuance associated with writing an Academic Paper has been superb!

All four European competitions show more gets you more – this was not the case for Major League Soccer last year:

[Charts: Major League Soccer Expected Wins Four; Winners Expected Wins PWP Data Relationships Four]

When more gets you more in MLS then I sense MLS has reached the BIG TIME – until then I think it’s a great breeding ground for Head Coaches that simply can’t get a job with a soccer club that has huge pockets of money.

Put another way – and many may disagree… I think a Head Coach who really wants to challenge their intellectual grit against another Head Coach can have greater opportunity to do that in MLS than they can by Head Coaching most clubs in Europe.

Why?  For at least one reason – a Head Coach in MLS really has to do more with less…

Errata – the first MLS slide indicates 654 events – the correct number is 646 events…

Best, Chris

COPYRIGHT – All Rights Reserved.  PWP – Trademark

Gluck: What adds more value? Goal Scored or Goal Prevented?

With soccer statistical analysis growing daily, a longer headline might be: 

What do the tea leaves show about team performance measurements in Major League Soccer?  Does the goal prevented show greater value, relative to points earned, than the goal scored?

Even that’s a bit wordy though… maybe it’s…

Soccer Statistics:  What does “right” look like now?

If you read The Numbers Game: Why Everything You Know About Soccer Is Wrong (July 30, 2013) by Chris Anderson and David Sally, there is a section called “On the Pitch” which explains how the game is a balance of strategies: preventing a goal is more important to earning points than scoring one, the game is about managing turnovers, and the game can be controlled both by tiki-taka and by keeping the ball out of play longer than the average team does.  (Sourced from this article: https://www.forbes.com/sites/zachslaton/2013/07/30/everything-we-know-about-soccer-is-wrong/#686a7ab47831)

My analysis shows:

Goals scored have more value (relative to points earned) than goals prevented.

Furthermore, I don’t just see the game as a balance of strategies, I see it as a balance of team statistics driven by team operations, strategies and tactics.

In the last four years the balance between how well a team attacks, versus how well the opponent attacks against that team, has more value (relative to points earned) than simply goals scored or prevented.

Finally, what shows as a valuable (balanced) team performance measurement for one team does not hold true as a valuable (balanced) team performance measurement for all teams; either home or away.

Composite Possession with Purpose (CPWP) Indices:

The CPWP index is generated by subtracting the opponents’ attacking statistics (DPWP) from the team’s own attacking statistics (APWP).  This is my way of ensuring I capture a team’s balanced performance (with and without the ball).
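In code form that balance is a simple difference.  The per-game values below are made up for illustration; the real APWP/DPWP numbers are weighted composites of the six PWP data points:

```python
# Sketch: CPWP as the team's attacking index minus the attacking index
# its opponents produced against it. Per-game values are hypothetical.

def cpwp(apwp, dpwp):
    """Composite index: own attacking output minus opponents' attacking output."""
    return apwp - dpwp

# A team's first three games: (own attacking index, opponent attacking index)
games = [(2.1, 1.4), (1.7, 1.9), (2.4, 1.1)]
season_cpwp = sum(cpwp(a, d) for a, d in games) / len(games)
```

A positive CPWP means the team is, on balance, out-executing its opponents across the six PWP activities; a negative value means the opponents are out-executing it.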

Intimate details on my PWP formulas can be seen in my academic paper, published in International Research in Science and Soccer II (2016): “Possession with Purpose: A Data-Driven Approach to Evaluating Team Effectiveness in Attack and Defense”, C. Gluck and T. Favero.

Breaking News:  An abstract on the use of Possession with Purpose Index as a tool for predicting team standings in Professional Soccer has just been approved for presentation (as a poster) at the World Conference on Science and Soccer – Rennes, France 2017.

General information and other relevant articles published, stemming from my research include:

Over the last four years I’ve measured these leagues/competitions using PWP analysis:

  • Major League Soccer 2013, 2014, 2015, 2016,
  • English Premier League 2014,
  • Bundesliga 2014,
  • La Liga 2014,
  • UEFA Champions League 2014,
  • Men’s World Cup 2014, and
  • Women’s World Cup 2015.
The lowest correlation this index has had to the league table was in MLS 2016 (.75).  The highest was for the EPL and La Liga of 2014 (.94).

I’d put the lower correlation in MLS 2016 down to increased parity across the league, but I’ll leave how my index can be used to measure parity for another day.

In this analysis I’ve evaluated 18 MLS teams that have played 34 (17 home and away) games in each of the last three years (2014, 2015, and 2016).  This equates to 1003 games of data or 2006 total game events for home and away teams.

My analysis excludes New York City FC, Orlando City FC, Chivas USA, Minnesota United FC, and Atlanta United FC as these teams have not played 34 games in each of the last three years.

Data will be presented in three separate categories, total games, away games, and home games.

In addition to evaluating team performance using my standard PWP Indices I have added three additional families of indices to my analyses.  They are:

  • Composite Possession with Purpose Indices Enhanced with Crossing Accuracy (CPWP-CR),
  • Composite Possession with Purpose Indices Enhanced with Clearances (CPWP-CL),
  • Composite Possession with Purpose Indices Enhanced with Crossing Accuracy and Clearances (CPWP-CR/CL).

My benchmark for passing the common sense ‘giggle check’ is, as always, Goal Differential.

Data arrays:

Total Games

Total game observations for consideration:

In every instance goal differential had the strongest correlation to points earned in the league table.

In every instance a CPWP index had the second and third highest correlation to points earned in the league table.

Best, in order of frequency for correlation to points earned, is provided below:

  • Goal Differential – 18 times 1st *benchmark
  • CPWP Index – 14 times 2nd or tied for 2nd
  • CPWP-CL Index – 6 times 2nd or tied for 2nd
  • CPWP-CR Index – 3 times 2nd or tied for 2nd
  • CPWP-CRCL Index – 3 times 2nd or tied for 2nd

Teams not fitting the norm (PWP Index solely being 2nd best) were: Colorado Rapids, Columbus Crew, LA Galaxy, Montreal Impact, New England Revolution, Portland Timbers, Real Salt Lake, San Jose Earthquakes, Sporting Kansas City, Seattle Sounders, and Toronto FC.

When viewing the DPWP, seven teams showed stronger correlations to points earned (preventing the opponent from scoring goals).  They were:  Chicago Fire, Colorado Rapids, Houston Dynamo, Montreal Impact, New York Red Bulls, Philadelphia Union, and Sporting Kansas City.

Meaning 11 teams showed the APWP indices as having higher correlation to points earned; i.e. scoring goals was more important than preventing goals scored.

Away Games

Away game observations for consideration:

In every instance, but one, goal differential had the strongest correlation to points earned in the league table.  The outlying team, where goal differential was not the best correlation to points earned, was Colorado Rapids.

  • I think this exception is worth noting.
  • For me, goal differential’s correlation to points earned has been THE benchmark in determining whether or not my team performance indices ‘make sense’.
  • Exceeding the benchmark, even once, confirms for me as a soccer analyst, that my approach adds value when looking for ways to help explain the game better.

In every instance a CPWP index had the second and third highest correlation to points earned in the league table.

Best, in order of frequency for correlation to points earned, is provided below:

  • Goal Differential – 17 times 1st *benchmark
  • CPWP Index – 9 times 2nd
  • CPWP-CL Index – 7 times 2nd
  • CPWP-CR Index – 2 times 2nd or tied for 2nd
  • CPWP-CRCL Index – 1 time 2nd or tied for 2nd

Teams not fitting the norm (PWP Index solely being 2nd best) were: Colorado Rapids, Columbus Crew, Chicago Fire, FC Dallas, Houston Dynamo, Montreal Impact, New England Revolution, Portland Timbers, Real Salt Lake, San Jose Earthquakes, and Toronto FC.

When viewing the DPWP indices, ten teams showed stronger correlations to points earned (preventing the opponent from scoring goals).  They were: Columbus Crew, Chicago Fire, Colorado Rapids, FC Dallas, Houston Dynamo, Montreal Impact, New York Red Bulls, Portland Timbers, Philadelphia Union, and Toronto FC.

Meaning ten teams showed the APWP indices as having a higher correlation to points earned; i.e. scoring goals was just as important as preventing goals scored.

Home Games

Home game observations for consideration:

In every instance goal differential had the strongest correlation to points earned in the league table.

In every instance a CPWP index had the second and third highest correlation to points earned in the league table.

Best, in order of frequency for correlation to points earned, is provided below:

  • Goal Differential – 17 times 1st *benchmark
  • CPWP Index – 10 times 2nd
  • CPWP-CL Index – 7 times 2nd
  • CPWP-CR Index – 2 times 2nd or tied for 2nd
  • CPWP-CRCL Index – 1 time 2nd or tied for 2nd

Teams not fitting the norm (PWP Index solely being 2nd best) were: Colorado Rapids, Columbus Crew, Chicago Fire, FC Dallas, Houston Dynamo, Montreal Impact, New England Revolution, Portland Timbers, Real Salt Lake, San Jose Earthquakes, and Toronto FC.

When viewing the DPWP indices six teams showed stronger correlations to points earned (preventing the opponent from scoring goals).  They were: Chicago Fire, Colorado Rapids, Montreal Impact, Philadelphia Union, Sporting Kansas City, and Vancouver Whitecaps.

Meaning 12 teams showed the APWP indices as having a higher correlation to points earned; i.e. scoring goals was more important than preventing goals scored.


The CPWP indices are not perfect but they do show very strong, consistent, correlation to points earned in the league table.

In every instance the balance of a team’s success in possession, passing accuracy, penetration, shot creation, shots taken, shots on goal, and goals scored AND preventing the opponent from doing the same exceeds either APWP (scoring goals) or DPWP (preventing goals scored).

The same CPWP index was not the best CPWP index for every team relative to points earned in the league table.

Teams playing in away games had different CPWP indices (showing greater correlations to points earned) than games played at home.

The DPWP indices did not, consistently, have a greater correlation to points earned than the APWP indices.

Colorado, Columbus, Montreal, New England, Portland, Real Salt Lake, San Jose, and Toronto consistently showed CPWP-CR and CL indices had greater correlation than the standard CPWP index.

Correlation of all indices, to points earned, differed between home and away games.

Final correlations to points earned for all teams measured (combined) the last three years in MLS were:

  • Goal differential =  .87
  • APWP   = .53 // DPWP = -.51 // CPWP = .74
  • APWP-CR = .52 // DPWP-CR = -.50 // CPWP-CR = .72
  • APWP-CL = .49 // DPWP-CL = -.46 // CPWP-CL = .66
  • APWP-CR/CL = .49 // DPWP-CR/CL = -.47 // CPWP-CR/CL = .66
  • Goals Scored = .63
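For anyone who wants to reproduce this type of analysis, here’s a minimal Python sketch of how a correlation like the ones above is computed; the index values and points below are made-up illustrations, not real PWP data.

```python
# Minimal sketch: correlating an index to points earned.
# The numbers are invented for illustration only.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient (r), pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical season: one index value and points earned per team.
cpwp_index = [0.12, 0.08, 0.05, -0.02, -0.10, -0.18]
points = [57, 54, 48, 41, 36, 29]

r = pearson(cpwp_index, points)
print(round(r, 2))       # correlation (r)
print(round(r * r, 2))   # R-squared: share of the variation explained
```

A negative index (like DPWP) correlating negatively to points is read the same way: the closer |r| is to 1, the stronger the relationship.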


The balance of attacking, versus stopping the attack of the opponent, has more value in measuring team performance (relative to points earned in the league table) than goals scored or prevented.

Goals scored, on average, (APWP) have more value (relative to points earned in the league table) than goals prevented (DPWP).

The correlation of team measurements, relative to points earned, varies from team to team, both home and away.

Therefore the value of individual player statistics (used to create those team statistics) varies from player to player, both home and away.

For example: the CPWP-CR and CPWP-CL indices showed 2nd best for correlation to points earned for Colorado, Columbus, Chicago, FC Dallas, Houston, Montreal, New England, Portland, Real Salt Lake, San Jose, and Toronto (in away or home games) over the last three years.

Therefore, the players who play on those teams should have their individual statistics (for crosses and/or clearances) weighted differently than players who play on the other teams; because the value of their successful crosses/clearances had greater weight relative to those teams earning points.
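If that weighting idea were implemented, it might look something like the Python sketch below; the team weights and cross counts are invented, purely to show the mechanics.

```python
# Hypothetical: weight a player's successful crosses by how strongly
# CPWP-CR tracked points earned for that player's team.
# All weights and counts below are invented for illustration.

team_cross_weight = {
    "Colorado": 1.25,   # CPWP-CR correlated strongly with points
    "Columbus": 1.25,
    "Seattle": 1.00,    # the standard CPWP index was the better fit
}

def weighted_crosses(team, successful_crosses):
    """Scale a raw cross count by the team-specific weight (default 1.0)."""
    return successful_crosses * team_cross_weight.get(team, 1.0)

print(weighted_crosses("Colorado", 40))  # 50.0
print(weighted_crosses("Seattle", 40))   # 40.0
```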

Last but not least, what the other leagues/competitions offered after one season/competition:

  • EPL // APWP = .92 // DPWP = -.88 // CPWP =.94
  • La Liga // APWP = .93 // DPWP = -.90 // CPWP = .94
  • UEFA Champions League // APWP = .74 // DPWP = -.66 // CPWP = .81
  • Bundesliga // APWP =.89 // DPWP = -.84 // CPWP = .93
  • Men’s World Cup 2014 // APWP = .58 // DPWP =-.77 // CPWP = .76
  • Women’s World Cup 2015 // APWP = .63 // DPWP = -.77 // CPWP = .76

Both the Men’s and Women’s World Cup competitions saw the value of the goal prevented greater than the goal scored.  In all other instances the balance between the two showed greater correlation.

Anderson and Sally weren’t wrong at all; it’s more about what right looks like depending on what league/competition is being evaluated.

Best, Chris

You can follow me on twitter @chrisgluckpwp

COPYRIGHT: All Rights Reserved.  PWP Trademark

NOTE:  All the data used in my analysis is publicly available with the exception of the Women’s World Cup 2015 data; my thanks to OPTA for providing me that data last year.

Passing – An oddity in how it’s measured in Soccer (Part I)

In my passion to better understand how soccer is statistically tracked I’ve come across what I would call an oddity in the general characterization of “passing” in the world’s greatest sport.

Here’s the deal – go to Squawka.com, whoscored.com, reference the “Stats” tab on mlssoccer.com, or review Golazo information, and you’ll notice they all provide passing information.

My intent is not to dig deep into passing details – not yet, anyway. We’ll get there in another article to follow after I get permission from OPTA to reference their F-24 definitions within their Appendices. For now here’s a simple question I have as a statistical person working on soccer analysis.

What is the number of passes I should use for teams and which denominator is the right number for total passes by both teams to help determine possession percentages?

In the MLS Chalkboard you can clearly see and count passes – here’s an example from a game this past week.

An important filter to note – the major term ‘Distribution’ is not clicked in creating this filter; all that is clicked is ‘successful pass’ and ‘unsuccessful pass’. Note also that some details are provided on the types of passes – we’ll get there in another article.

Bottom line is that the MLS Chalkboard identifies 309 successful passes and 125 unsuccessful passes for a total of 434 passes attempted.

On the MLS Stat sheet – one tab over, but linked here – the number of passes for Chivas = 369; that number doesn’t match the Chalkboard in total, successful, or unsuccessful passes.

For Golazo, for that same game here’s their total: 369 Passes total with 75% accuracy meaning the total successful passes was 277 and unsuccessful passes totaled 92. Not the same either.

For Squawka.com here’s their total:
Successful = 270 /// headers (8), throughballs (2), passes (239), long balls (21) and supposedly crosses (0)
Unsuccessful = 86 /// passes (52), headers (14), long balls (20), no unsuccessful crosses or throughballs logged here?! Yet the MLS chalkboard indicates 26 unsuccessful crosses!
All told that is 356 passes; those figures don’t match the other data sources.

For whoscored.com here’s their total: Short ball = 323, Long ball = 52, Through ball = 2, Cross = 35, for a total of 412 passes – again that figure doesn’t match the other data sources.

So what’s the right total? Here’s a comparison table showing each source of data and the total passes it reports, for statistical folks like us to leverage in our analysis.

  • MLS Chalkboard – 434
  • MLS Statistics – 369
  • Golazo (same as MLS Stats) – 369
  • Squawka – 356
  • Whoscored – 412
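To see why the choice of denominator matters for possession percentages, here’s a small Python sketch using the team totals above; the opponent’s total (400 attempts) is a hypothetical figure, since only one side’s totals are listed here.

```python
# Possession percentage depends entirely on which pass totals you trust.
# Team totals come from the comparison above; the opponent total
# (400 attempts) is hypothetical, just to show the spread.

team_passes = {
    "MLS Chalkboard": 434,
    "MLS Statistics": 369,
    "Golazo": 369,
    "Squawka": 356,
    "Whoscored": 412,
}

OPPONENT_PASSES = 400  # hypothetical opponent attempt count

for source, team in team_passes.items():
    possession = team / (team + OPPONENT_PASSES) * 100
    print(f"{source:15s} {possession:.1f}% possession")
```

Same match, same team – anywhere from roughly 47% to 52% possession depending on which source you trust.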


I have no idea what ‘right’ looks like here but here’s what I’ve done to work through this issue.

I chose one source, the MLS Chalkboard, to gather and analyze statistics on passing and possession and all other things available from that data source – where other information is not offered there I reference the MLS Stats tab and Formation tab.

Why did I choose the Chalkboard? Because it provides additional detail that shows more clarity on all the other types of passes that occur in a game.

For example; if you scroll down on the Chalkboard link and select Set-Pieces you’ll see that Throw-ins are included in the successful passing totals – by definition a Throw-in is a pass as it travels from one player to another.

So my recommendation, if interested, is to track Major League Soccer statistics using the MLS Chalkboard first – it’s harder but seems to be the best one at this time.

I’m not sure why the MLS Chalkboard, Golazo, Whoscored and Squawka all had different team passing statistics; given that, it is likely they all have different individual player statistics as well. In asking a representative from OPTA about that, their response was provided below:

“The difference between the different websites could be down to a few things. Either they take different levels of data from us, or they take the same feed but only use a chosen set of information from each feed to display their own take on each game.”

By the way – I did try to find a reasonable definition of what a pass is in soccer; here’s some of that information before final thoughts. Note: they are all different, and Wikipedia proves, by its definition, why it’s a pretty unreliable source for information – for them a pass in soccer must travel on the ground – no kidding. Here’s their definition up front:

“Passing the ball is a key part of association football. The purpose of passing is to keep possession of the ball by maneuvering it on the ground between different players and to advance it up the playing field.”

Other definitions get pretty detailed – it is what it is apparently – complicated…

Passing Definition: About.com World Soccer.

When the player in possession kicks the ball to a teammate. Passes can be long or short but must remain within the field of play.

Soccer Dictionary: Note there are numerous definitions provided in this link, so offering up a specific one is troublesome; I will cut and paste those definitions below:

Cross, diagonal: Usually applied in the attacking third of the field to a pass played well infield from the touch-line and diagonally forward from right to left or left to right.
Cross, far-post: A pass made to the area, usually beyond the post, farthest from the point from which the ball was kicked.
Cross, flank (wing): A pass made from near to a touch-line, in the attacking third of the field, to an area near to the goal.
Cross, headers: 64% of all goals from crosses are scored by headers.
Cross, mid-goal: A pass made to the area directly in front of the goal and some six to twelve yards from the goal-line.
Pass, chip: A pass made by a stabbing action of the kicking foot to the bottom part of the ball to achieve a steep trajectory and vicious back spin on the ball.
Pass, flick: A pass made by an outward rotation of the kicking foot, contact on the ball being made with the outside of the foot.
Pass, half-volley: A pass made by the kicking foot making contact with the ball at the moment the ball touches the ground.
Pass, push: A pass made with the inside of the kicking foot.
Pass, swerve: A pass made by imparting spin to the ball, thereby causing it to swerve from either right to left or left to right. Which way the ball swerves depends on whether contact with the ball is made with the outside or the inside of the kicking foot.
Pass, volley: A pass made before the ball touches the ground.
Passing: When a player kicks the ball to his teammate.
Through pass: A pass sent to a teammate to get him/her the ball behind his defender; used to penetrate a line of defenders. This pass has to be made with perfect pace and accuracy so it beats the defense and allows attackers to collect it before the goalkeeper.

Ducksters.com offers up a Glossary and Terms for Soccer; here’s how they define a pass. This one is geared more towards teaching players the various types of passes they will need good skill to execute.

Direct Passes – The first type of soccer pass you learn is the direct pass. This is when you pass the ball directly to a teammate. A strong firm pass directly at the player’s feet is best. You want to make it easy for your teammate to handle, but not take too long to get there.

Passes to Open Spaces – Passing into space is an important concept in making passes in soccer. This is when you pass the ball to an area where a teammate is running. You must anticipate both the direction and speed of your teammate as well as the opponents. Good communication and practice is key to good passes into space.

Wall Passes (One-Twos) – Now we are getting into more complex passing. You can think of a wall pass as bouncing a ball off of a wall to yourself. Except in this case the wall is a teammate. In wall pass you pass the ball to a teammate who immediately passes the ball back to you into open space. This helps to keep the defense off balance. This is a difficult maneuver and takes a lot of practice, but the results will make it worth the effort.

Long Passes – Sometimes you will have the opportunity to get the ball up the field quickly to an open teammate. A long pass can be used. On a long pass you kick the ball differently than with other shorter passes. You use an instep kick where you kick the soccer ball with your instep or on the shoelaces. To do this you plant your non-kicking foot a few inches from the ball. Then, with your kicking leg swinging back and bending at the knee, snap your foot forward with your toe pointed down and kick the ball with the instep of your foot.

Backward Pass – Sometimes you will need to pass the ball backward. This is done all the time in professional soccer. There is nothing wrong with passing the ball back in order to get your offense set up and maintain control of the ball.

Now that’s probably not ‘every’ definition available but they pretty much say the same thing apart from ‘on-the-ground’ by Wikipedia – a pass is a transfer of the ball from one player to another…

In closing…

As noted earlier – I’m not really sure what right looks like but I remain convinced that all these organizations are well-intentioned in offering up free statistics for others to use, be it for analysis, fantasy league or simply to check it out.

In my own effort to develop more comprehensive measurements and indicators a standardized source of data for the MLS would be beneficial – if the intent for MLS is to endorse OPTA then there remains a conflict as Golazo clearly does not use the same data filters as the Chalkboard.

My vote is, and will remain: keep the Chalkboard, and then, MLS, consider ways – as OPTA (Perform Group) is now – to improve it for more beneficial analysis.

Here is Part II  – where I peel back a wee bit more – consider these phrases, successful crosses, launches, key passes, through-balls, throw-ins and more, as ASA continues its venture into Soccer Analysis in America.

Here’s a few paraphrased thoughts from other folks who offer up articles on ASA about this issue on passing statistics:

Jared Young – The massive difference in pass data between sites is troubling and disturbing; I’ve been primarily using whoscored.com and golazo for my numbers so I may have to explore other options.

Cris Pannullo – Major League Soccer should take an initiative and define what pass means in their league; it is surprising that they haven’t given how popular things like fantasy sports are; people eat statistics up in this country.

All the best, Chris

You can follow me on twitter @chrisgluckpwp

Separating winners from losers in Major League Soccer…

We are past the halfway point in Major League Soccer this year and if you recall from this previous article I promised I would revisit my Expected Wins analysis again at about this stage.

To continue to chart the progress of PWP, to include the data points behind the calculations, I am offering up some diagrams on what the data looks like after:

  1. The 92 game mark of the MLS Regular Season (184 events).
  2. The 183 game mark of the MLS Regular Season (366 events).
  3. The same data points for World Cup 2014  (128 events).

For background details on Possession with Purpose click here.

To begin…

A reminder of how things looked after 184 Events (92 Games)…


Trends indicated that winning teams passed the ball more, completed more passes, penetrated the final third slightly less but completed more of their pass attempts in the final third.

For shooting; winning teams shot slightly less by volume but were far more successful in putting those shots on goal and scoring goals.

For details you can enlarge the diagram and look for your specific area of interest.

As for how the trends show after 366 Events (183 Games)…


Winning teams now average fewer pass attempts and complete slightly fewer passes.

There is a marked decrease in pass attempts into the opposing final third and slightly fewer passes completed within the final third.

In other words – teams are counter-attacking more and playing a style more related to ‘mistake-driven’, counter-attacking football, as opposed to positive attacking leading into the opponents’ final third.

As for shooting; winning teams are now taking more shots, with more of those shots on goal and more of them resulting in goals scored.

In my opinion teams are taking advantage of poor passing accuracy to generate turnovers.

In turn those turnovers are generating cleaner and clearer shots, given opponents’ poor positional play in transition.

My expectation is that more teams will now begin to focus on bringing in newer players that have better recovery skills and can defend better.

In contrast, here’s how these same data points look after completion of the World Cup of 2014… there is a difference…



Winning teams average more passes attempted and far more completions than losing teams.

In addition winning teams penetrated far more frequently than losing teams, and that increase in penetration also translated to an increase in passes completed within the final third.

With respect to shooting; winning teams shot more, put more shots on goal, and scored far more goals.

Clearly what we see here is that quality in player skill levels also translated to an increase in quantity.

That should become even more apparent in looking at the PWP outputs for MLS and World Cup Teams…

Here they are for MLS at the 184 Events point this year:



A quick review of the data outputs shows winning teams averaged 51% possession and were 2 percentage points better in overall passing accuracy.

That passing accuracy advantage also carried into the final third but when taking shots losing teams averaged more shots taken, per penetration, than winning teams.

Bottom line here is that winning teams had those fewer shots taken generate more shots on goal and more goals scored than losing teams.

After the 366 Event point this is how those same outputs look…


Like the indicators in the PWP data points, the percentages here are beginning to reflect the counter-attacking style of football taking over as the norm.

Winning teams now, on average, possess the ball less than their opponents… wow… mistake driven football is taking hold across the MLS.

As for Passing accuracy within and outside the final third…

Winning teams continue to be better in passing – and that level of accuracy is driving a large increase in shots taken, per penetration, by winning teams compared to losing teams (almost 2% different).

That is a marked difference (4% swing), from earlier, where losing teams shot more frequently, per penetration, than winning teams.

In addition that increase in shots taken, per penetration, also results in more shots on goal, per shot taken, and more goals scored, per shot on goal.

The margin between winning teams, and losing teams, for goals scored versus shots on goal, at the 184 Event point versus 366 Event point, still remains > 29%.

 So how about teams in the World Cup???


Like earlier, winning teams not only passed the ball more frequently, they possessed the ball more, by almost 5% (52.56% to 47.89%).

So contrary to what others might think – tiki-taka is not dead, it’s just been transformed a wee bit…

With respect to passing accuracy…

I’m not sure it can be any clearer than this – winning teams averaged 82.40% and losing teams averaged 80.46%.

What makes these outputs different from MLS is that the level of execution is far higher in passing accuracy; by as much as 6%.

To put that in perspective: if a team attempts 500 passes at MLS-level accuracy that equals 380 passes completed – compared to 412 passes completed at World Cup-level accuracy; clearly the level of execution is much higher.

That difference of 32 passes completed can have a huge impact when penetrating and creating opportunities within the final third.
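The arithmetic above can be checked in a couple of lines of Python; note the 76% MLS figure is approximate (roughly 6% below the World Cup winning-team average of 82.40%).

```python
# Same 500 pass attempts, two levels of passing accuracy.
attempts = 500
mls_accuracy = 0.76          # approximate MLS-level accuracy
world_cup_accuracy = 0.8240  # World Cup winning-team average

mls_completed = round(attempts * mls_accuracy)        # 380
wc_completed = round(attempts * world_cup_accuracy)   # 412
print(wc_completed - mls_completed)                   # 32 more completions
```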

What makes it even tougher is that the quality of defenders is significantly higher at the World Cup level as well.

With respect to penetration and creation within the final third…

World Cup winning teams averaged 2% greater penetration per possession than winning teams in the MLS.

By contrast World Cup winning teams generated fewer shots taken per penetration than those in the MLS.

Does this speak to better defending?  I think so…

What I think is happening is that quality gets the team into the box, but then the quality of the defenders and goal keepers, in that confined space, is taking over.

This should be evident, even more so, when seeing that winning teams in the World Cup also put fewer shots on goal per shot taken than winning teams in MLS.

And that also translated to goals scored: winning teams in the World Cup scored fewer goals per shot on goal as well…

In closing…

All told, winning teams in the World Cup displayed slightly different average percentages than winning teams in MLS, with one exception – passing accuracy.

And given the importance of the tournament it’s no wonder…

Without having the data yet, I’d expect that for the better teams in the EPL, Bundesliga, and other top European leagues that difference in passing accuracy would remain.

As for the difference in possession (winning teams clearly possessing the ball more than losing teams) I’m not sure – mistake driven football, if memory serves is an approach Chelsea have used in the past…

I’d imagine it’s a pendulum type effect – as more teams work towards mistake driven football more teams will strengthen their ability to recover and open the game up a bit with direct attack to force the opponent from pressing so high.

I’ll be looking for additional trends as the year progresses to see if direct play increases – perhaps a good indicator of that might be even fewer penetrations and more crossing?

With respect to statistical relevance of the data and the outputs generated…

In every case the relationships created, be they Exponential or 4th-Order Polynomial, had correlations that exceeded .95.

In other words the variations are minimal and should really reinforce just how tight the difference is between winning and losing in a game of soccer…
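For reference, the goodness-of-fit figure mentioned here can be computed as below; the observed/predicted values are invented, standing in for actual PWP data points and a fitted trend line.

```python
# R-squared between observed data points and a fitted curve's predictions.
# Values are illustrative, not actual PWP data.

def r_squared(observed, predicted):
    """1 minus (residual sum of squares / total sum of squares)."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

observed = [1.2, 1.9, 3.1, 4.2, 4.8]
predicted = [1.1, 2.0, 3.0, 4.1, 5.0]  # e.g. from a polynomial trend line

print(round(r_squared(observed, predicted), 3))  # 0.991
```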

Best, Chris

Re-tweets appreciated…

COPYRIGHT, All Rights Reserved – PWP, Trademark


CPWP Predictability versus MLS Results (Week 18 and 19)

Having been away on business last week I was unable to publish last week’s predictability versus reality results; in catching up, here’s how things went in Week 18 and Week 19 versus the Composite Possession with Purpose Predictability Index (CPWP PI); excluding the Chivas USA v DC United match later this evening.

To begin here’s the CPWP Predictability Index for teams at Home, followed by, the CPWP PI for teams playing Away for Week 18/19…





Before digging into the results versus predictability note the significant difference in team performance at Home versus Away.

Pretty compelling evidence to reinforce what most believe, the home team usually does better… but… some teams can and will perform very strong on the road.

In reviewing the results… 

If you want the game by game comparison for Week 18 & Week 19 it can be found at the end of this article.

For now know that the CPWP PI accurately reflected five of the eight wins (draws excluded) for Week 18.

In addition, the CPWP PI accurately reflected seven out of seven wins (draws excluded) for Week 19.

If keeping track (after four weeks of leveraging the CPWP PI) it has been accurate in predicting 20 of 27 games (excluding draws); that’s a 74% success rate.

In Closing…

In general, the home team has won 74 games at home while the away team has won 47 games on the road – based purely on those results (draws excluded), the home team’s average chance of winning is 62%.

It would appear that the use of the CPWP, as a predictability model, gives someone a 12% better chance of predicting the outcome of a game than by purely picking the home team to beat the away team…
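The prediction rule itself is simple enough to sketch in Python: pick whichever side has the higher CPWP PI, and skip draws since the index does not measure for them. The index values below are the Week 18 figures from the game-by-game notes in this article (the Philadelphia v Colorado draw is excluded).

```python
# CPWP PI prediction rule: the side with the higher index should win.
# (home_index, away_index, actual result) - Week 18 values from this article.

week_18 = [
    (0.0368, -0.2174, "away"),   # San Jose v DC United
    (0.1184, 0.1047, "home"),    # New York v Columbus
    (0.0886, -0.1706, "home"),   # Toronto v Houston
    (-0.0170, 0.1112, "away"),   # Montreal v Sporting KC
    (0.2516, -0.2241, "away"),   # New England v Chicago
    (0.1912, -0.1827, "away"),   # Vancouver v Chivas
    (0.0476, -0.1278, "home"),   # LA Galaxy v Real Salt Lake
    (0.2669, 0.0486, "home"),    # Seattle v Portland
]

correct = 0
for home, away, actual in week_18:
    predicted = "home" if home > away else "away"
    correct += predicted == actual

print(f"{correct} of {len(week_18)} correct")  # 5 of 8
```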

Perhaps others have a different view?

Best, Chris


Week 18:

San Jose, at home, lost to DC United 1 – 2.  San Jose, at home, has a .0368 CPWP PI while DC United, on the road, has a -.2174 – the CPWP PI was not accurate.

New York, at home, won against Columbus 4-1.  New York, at home, has a .1184 while Columbus, on the road, has a .1047 – the CPWP PI was accurate.

Toronto, at home, won against Houston 4-2.  Toronto, at home has a .0886 while Houston, on the road, is -.1706 – the CPWP PI was accurate.

Philadelphia, at home, drew with Colorado 3-3.  CPWP PI does not measure for draws.

Montreal, at home, lost to Sporting KC 1-2.  Montreal, at home, is -.0170 while Sporting KC, on the road, is .1112 – the CPWP PI was accurate.

New England, at home, lost to Chicago 0-1.  New England, at home, is .2516 while Chicago, on the road, is -.2241 – the CPWP PI was not accurate.

Vancouver, at home, lost to Chivas 1-3.  Vancouver, at home, is .1912 while Chivas, on the road, is -.1827 – the CPWP PI was not accurate.

LA Galaxy, at home, won against Real Salt Lake 1-0.  LA, at home, is .0476 while RSL, on the road, is -.1278 – the CPWP PI was accurate.

Seattle, at home, won against Portland 2-0.  Seattle, at home, is .2669 while Portland, on the road, is .0486 – the CPWP PI was accurate.

Week 19 (with the Chivas versus DC United game left to play):

Philadelphia, at home, defeated New York 3-1; Philadelphia, at home, is -.0107 while New York, on the road, is -.0711 – the CPWP PI was accurate.

Columbus lost, at home, to Sporting KC 1-2; Columbus, at home, is .0797 while Sporting KC, on the road, is .1112 – the CPWP PI was accurate.

Toronto, at home, drew with Vancouver 1-1. (not measured).

LA, at home, beat New England 5-1; LA, at home, is .0476 while New England, on the road, is -.0565 – the CPWP PI was accurate.

Portland, at home, beat Colorado 2-1; Portland, at home, is .0271 while Colorado, on the road, is -.0452 – the CPWP PI was accurate.

Sporting KC, at home, beat LA 2-1. Sporting, at home, is .3362 while LA, on the road, is .1393 – the CPWP PI was accurate.

New York at home, drew with San Jose 1-1. (not measured).

Columbus, at home, beat Montreal 2-1; Columbus, at home, is .0797 while Montreal, on the road, is -.0950 – the CPWP was accurate.

Chicago, at home, drew with Philadelphia 1-1. (not measured).

Dallas, at home, beat New England 2-0; Dallas, at home, is .0599 while New England, on the road, is -.0565 – the CPWP was accurate.

Houston, at home, drew with Toronto 2-2.  (not measured).

Real Salt Lake, at home, drew with Vancouver 1-1. (not measured).


FIFA World Rankings – time for a change?

Although this article was written about 18 months ago – I still think it retains relevance; for two reasons:

  1. FIFA is embroiled in a huge scandal, and
  2. People seem to keep reading it almost 2 years after the fact.

As such here’s a redux of the primary headline with some added juice about the corrupt behavior of the organization to date, and why the rankings REALLY do need a re-look at how they are calculated!

I don’t claim that my suggested new way is THE way, but I do think it represents a considerably more open and objective ranking approach than how it’s currently done.

Finally, as with my latest on Moneyball 2 – I highly recommend you get a cup or pint of your favorite beverage before digging in.

To begin – here’s what I offered previously; later on I’ll add some additional thoughts not touched on in the original article; thanks in advance for your patience:

In order to offer up my comments/questions for consideration it’s appropriate for me to include the FIFA World Rankings as of 20 months ago and then the link on how it’s determined.

First the link and the diagram below showing the Top 30 as of June, 2014.

June 2014 FIFA Rankings – Coca-Cola Sponsored

Now, here’s how it’s calculated

What follows is a direct lift from the link provided above:  FIFA explanations are offered in “bold” while my questions/comments will be offered in ‘italics’.


The basic logic of these calculations is simple: any team that does well in world football wins points which enable it to climb the world ranking. 

Well I’m not so sure it’s simple but it does provide what it says it does – a listing from best to worst organized by ‘points earned’.

A team’s total number of points over a four-year period is determined by adding: 

The average number of points gained from matches during the past 12 months; and the average number of points gained from matches older than 12 months (depreciates yearly).

  • Maybe it’s just me but I don’t see the relevance of using four years worth of history in ranking current teams.
  • My own personal view is that the last two years (which ensures including the lead up to the World Cup) has more relevance given the nature of players that appear and disappear, from year to year, on National Soccer teams.
  • I wonder what the bi-yearly turnover rate in player personnel is compared to the quad-yearly (is that a word?) turnover rate in player personnel?
  • And what about changes in Head Coaches; shouldn’t that impact a National Team Ranking? 
  • Most, I think, would agree that a change in Head Coach will not only drive a change in player selection it will also drive a change in how the team strategically and tactically attacks and defends.
  • When that change occurs is it really the same team?
  • In considering the four-year life-span of the points, I’m not sure how a team’s performance three years ago – with perhaps a 50% change in player personnel – has any bearing on how a team might perform in the current year.
  • The same can be said for a team coached by someone else 3-4 years ago versus in the last year or so…
  • Perhaps? a team should be ‘reduxed’ when a new Head Coach arrives on scene?   Might using just two years worth of data help ‘quantify’ that redux?
  • Or, in other words previous performance is excluded and a new clean sheet is started?
  • Perhaps? a team should be ‘reduxed’ when over 50% of the player personnel change? 
  • In other words previous performance with a team that has over 50% of new players means a new clean sheet is started?
  • Maybe this keeps the FIFA World Cup rankings more up to the ‘now’ as opposed to the ‘then’?

Calculation of points for a single match:

The number of points that can be won in a match depends on the following factors:

Was the match won or drawn? (M)

How important was the match (ranging from a friendly match to a FIFA World Cup™ match)? (I)

How strong was the opposing team in terms of ranking position and the confederation to which they belong? (T and C)

  • Results are qualitatively based, not quantitatively based; if the FIFA Rankings are intended to be used to “quantify”/”deem” which teams are better or worse, in overall performance, relative to placement in future tournaments, is it better to rank those teams using a quantitative or qualitative analysis?
  • I’d offer it’s better to use a quantitative analytical approach.
  • Friendlies have absolutely no bearing on whether or not a team is good or bad – why? 
  • Because they are experiments that Head Coaches use to evaluate players for when it really matters; attaching a value to a Friendly that exceeds the ‘intent’ of the Friendly (brutal facts) violates all the common-sense logic of a statistically based ranking system.
  • How is the strength of one Confederation compared to another? 
  • The percentages are provided further below but no additional explanation is offered to go with that…
  • If teams only meet in the World Cup, outside of Friendlies, from different Confederations, what is the value of one FIFA World Ranking System; isn’t it simply more relevant to create a FIFA World Ranking after all the Confederations have completed their elimination tournaments?
  • And then, perhaps, that listing is leveraged when the seeded teams from each Confederation are matched up to the other Confederations for the World Cup?
  • If a quantitative statistical approach were used it would be easier as you’d be comparing ‘apples to apples’…
  • And if Friendlies are not included in the analyses, then the only time the real Rank has value is right before and right after the World Cup.
  • And after the World Cup it could be used to seed teams for Confederation tournaments; or is that divesting the FIFA World Ranking of too much influence?
  • Will the hog butcher itself?

These factors are brought together in the following formula to ascertain the total number of points (P).

(P = M x I x T x C)    The following criteria apply to the calculation of points:

M: Points for match result

  • Teams gain 3 points for a victory, 1 point for a draw and 0 points for a defeat. In a penalty shoot-out, the winning team gains 2 points and the losing team gains 1 point.
    • Again, when in a Friendly, this places a value of ‘worth’ in winning, when in fact there is no value in winning a Friendly.
    • The intent of a Friendly is for the Head Coaches to see how their players perform and the players get a feel for what it’s like to work in that coaches system with other teammates.
    • If FIFA takes the approach of awarding Ranking Points to teams who win Penalty Shoot-outs, then why have draws as a part of the game at all?
    • In a knock-out competition draws can’t happen; so why can they happen in regular competition?
    • Why not just have every game that ends in a Draw result in a Penalty Shoot-out where the winner gets 2 points in the League Table and the loser gets one point in the League Table?
    • Might this approach also help players better train for crucial PK competitions in the World Cup?
    • Put another way; is the “consistency of purpose” missing when it comes to FIFA and how games are ended?

I: Importance of match

  • Friendly match (including small competitions): I = 1.0
  • FIFA World Cup™ qualifier or confederation-level qualifier: I = 2.5
  • Confederation-level final competition or FIFA Confederations Cup: I = 3.0
  • FIFA World Cup™ final competition: I = 4.0
    • What is a “small competition”?
    • Why is the value of a FIFA World Cup match any different than the value of any other specific competition that is not a Friendly?
    • All of those other competition types (excluding Friendlies) can and do see players rotating in and out of National Team squads; so the teams are not the same teams all the time.
    • In addition, there are numerous changes in Head Coaches between World Cup events; therefore does it seem reasonable that all the Competition levels have different values/levels of importance?

T: Strength of opposing team

  • The strength of the opponents is based on the formula: 200 – the ranking position of the opponents.
    As an exception to this formula, the team at the top of the ranking is always assigned the value 200 and the teams ranked 150th and below are assigned a minimum value of 50. The ranking position is taken from the opponents’ ranking in the most recently published FIFA/Coca-Cola World Ranking.

    • Given that the method for ranking teams is more qualitative than quantitative, this statistical calculation is highly suspect and open to significant interpretation/influence outside the bounds of objectivity.
    • And we’ve already seen how objectivity can be manipulated with the selection of Qatar hosting the 2022 World Cup.
    • If no values are attached to Friendlies then this strength of Opponent has no relevance until the World Cup; the only time teams meet in a competition that actually has real value…

C: Strength of confederation

When calculating matches between teams from different confederations, the mean value of the confederations to which the two competing teams belong is used. The strength of a confederation is calculated on the basis of the number of victories by that confederation at the last three FIFA World Cup™ competitions. Their values are as follows:

  • CONCACAF 0.88
  • AFC/CAF 0.86
  • OFC 0.85
    • How were these percentages developed and when, and how often, are they updated?
    • Again, at the risk of redundancy: because I think it’s important to minimize internal/external influence in judging a team’s effective performance, this category in the calculation gives the impression of adding a ‘fudge factor’.
    • A more quantitative approach would eliminate the need for this “strength of Confederation”…
    • The less subjective influence FIFA has on the Confederation and World Ranking systems the better…
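
The published formula above can be sketched in code to make the mechanics concrete. This is a minimal sketch built only from the criteria listed in this section; the function and variable names are mine, and the confederation weights are supplied by the caller.

```python
# Minimal sketch of FIFA's published formula P = M x I x T x C,
# using only the criteria described above. Names are illustrative.

# I: importance of match
IMPORTANCE = {
    "friendly": 1.0,       # Friendly match (including small competitions)
    "qualifier": 2.5,      # World Cup or confederation-level qualifier
    "confed_final": 3.0,   # Confederation final competition / Confederations Cup
    "world_cup": 4.0,      # FIFA World Cup final competition
}

def opponent_strength(opponent_rank):
    """T: 200 minus the opponent's ranking position, with FIFA's exceptions."""
    if opponent_rank == 1:       # the top-ranked team is always worth 200
        return 200.0
    if opponent_rank >= 150:     # teams ranked 150th and below floor at 50
        return 50.0
    return 200.0 - opponent_rank

def match_points(result, match_type, opponent_rank, confed_weights):
    """P = M x I x T x C for a single match."""
    m = {"win": 3.0, "shootout_win": 2.0, "draw": 1.0,
         "shootout_loss": 1.0, "loss": 0.0}[result]   # M: match result
    i = IMPORTANCE[match_type]                        # I: importance of match
    t = opponent_strength(opponent_rank)              # T: opponent strength
    c = sum(confed_weights) / 2.0                     # C: mean confederation weight
    return m * i * t * c

# Example: a World Cup win over the 8th-ranked team, CONCACAF (0.88) vs AFC (0.86)
print(match_points("win", "world_cup", 8, (0.88, 0.86)))
```

Notice that even a Friendly win over a mid-ranked opponent yields a few hundred points (3 x 1.0 x T x C), which is exactly the “worth in winning a Friendly” objection raised above.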

Final thoughts on the current FIFA approach:

  • As much as there are ‘numbers’ involved, this approach really is tainted with subjectivity.

Moving on to my Possession with Purpose Index – specifically the one resulting from the 2014 World Cup:


There are considerable differences, even without the final two games being played…

  • The most glaring difference between the two Indices/Rankings is the inclusion of Ukraine, Denmark, Slovenia, Scotland, Romania, and Serbia in the FIFA Top 30, while Nigeria, Korea, Ghana, Cameroon, Iran and Australia are excluded.
  • Note, since the date of the FIFA Rankings is June 2014 there was plenty of time for FIFA to ask themselves why teams that made the World Cup did not make the Top 30 and teams that didn’t make the World Cup made the Top 30.
  • Is it really a relevant Ranking system if there are teams in the top 30 who didn’t make the World Cup and teams outside the top 30 that did make the World Cup?
  • If a team is strong enough to qualify, from within their Confederation, then shouldn’t they, by rights, be in the Top 30 of the FIFA World Rankings?
  • Is there supposed to be a ‘good feeling’ for a Nation to have a team in the Top 30 that didn’t make the World Cup?
  • What is the intent of the FIFA World Rankings anyway?  If it’s strictly for “seeding purposes” wouldn’t it be reasonable that the teams competing in the tournament are the only teams to appear in the Top 30/32?
  • And why a Top 30; why not a top 32?
  • If you exclude Friendlies from the calculation what does the FIFA World Ranking Index look like?

I wonder how quickly the table adjusts from month to month?

  • If the FIFA World Ranking system does not react quickly to changes in new Head Coaches, or major shifts in player personnel, how effective is it in dropping or raising teams based upon the World Cup?
  • I think, in this day and age, the ability to adjust the ranking of teams should be quicker and have less influence based upon past performance and more influence based upon current form; especially with changes in formations, styles, players and Head Coaches.

Finally, it’s worth mentioning again, if FIFA can appear to be ‘bought’ (that’s no longer “an appearance” – it’s FACT) when selecting Qatar for the World Cup in 2022 how reliable (really reliable) is their Index as calculated today?

  • Based on wins/draws (a qualitative analysis),
  • Influenced by games that mean nothing (Friendlies), and
  • Influenced by games played four years ago where neither the team nor the Head Coach might be the same?

In Closing:

  • There’s no question that corruption existed, and probably still does, in some fashion or another – when that type of environment exists EVERY path forward should be reviewed to cleanse and objectify rankings for the future.
  • My approach has been published – it is reasonable – accurate – (in some cases extremely accurate) and the rankings in my Indices can show movement up and down the ladder when head coaching changes are made.
  • How a team did three years ago, under one coach, says absolutely nothing about how a team will do under another head coach, three years later.
  • If a national team changes their head coach the team ranking should be scrubbed and reviewed with a new start point somewhere outside the top 30-40… at least that’s an idea…
  • My Index is quantitative – there is no qualitative measurement (judgment) involved – therefore the politics of FIFA will never-ever influence a team’s ranking.

If you think it’s time for a change in how FIFA calculates world rankings retweet this article – I’m not saying it’s THE answer but there are more ways (objective ways) to rank teams that completely ignore the almighty dollar bill.

Best, Chris  @chrisgluckpwp

 COPYRIGHT, All Rights Reserved.  PWP – Trademark

MLS Soccer – Fouls in the Defending Third; their potential influence in DPWP and Points in the League Table

As part of my continuing analysis on Major League Soccer, with respect to Possession with Purpose, here’s an interesting view on the relationship between fouls committed in the Defending Final Third versus Defensive Possession with Purpose (DPWP), Points in the League Table, and Composite Possession with Purpose (CPWP)…

Fouls made in the Defending Third


Teams are ranked from most to least fouls in the defending third, alongside their DPWP, sum of points taken, and CPWP.

Note that three of the four teams with the fewest points in Major League Soccer – Portland, Chivas, and Montreal – also commit the most fouls in their own defending third; FC Dallas, a team that has been sliding in the league standings of late, rounds out the top four.

An issue with this table is that the number of games played is not equal – it is what it is.

Note the teams in the bottom half of the table: LA Galaxy, New England, Colorado, Sporting, and Seattle come to mind as teams doing well this year both in minimizing fouls and in their standing in the league table – the odd one out is New York.

Perhaps their lower points total and lower PWP Index ratings have more to do with having average players who are more disciplined in not fouling but less disciplined in good positional play?

In other words they are so far out of position that they can’t get close enough to foul in order to shut down their opponent; or, they are so disciplined in not giving away a set-piece/penalty they would rather rely on their keeper to try and make a save or rely on the opponent to ‘miss’?

I’d probably support the latter more than the former – but since their back four has been a mish-mash of starters throughout the whole year it’s pretty hard to tell…

In looking from a different point of view; fouls made versus PK’s conceded, Opponent Goals Scored and Goal Differential the overall data still remains compelling – fouling your opponent in your Defending Final Third will negatively impact points in the league table…

Fouls made in the Defending Third with PKs conceded


In looking at Portland in particular, clearly the number of fouls conceded in the final third relates to the average number of PK’s conceded this year… (4.21 to .64).

Three other leaders (if you will) in this area are Montreal (3.17 to .33), Houston (2.87 to .40), and New York (2.29 to .43).  All three have negative CPWP Index numbers: Montreal (-0.2345), Houston (-0.2741), and New York (-0.0416).

The odd one out, by a slim margin, is Portland, who sit on 0.0616 CPWP; a testament, if you will, to their ability to score goals…. if only they could prevent goals better.

The most compelling evidence to me, however, is not pictured: the Correlation of Fouls committed in the Defending Final Third to Opponent Goals Scored is .6146, and the Correlation to Goal Differential is -.5267.

In other words there is a strong relationship between fouls committed in the Defending Final Third and Goals conceded…

Of interest for me is that the relationship also translates back to DPWP and CPWP; the correlation of Fouls conceded in the Defending Final Third to DPWP is .5495 while the Correlation to CPWP is -0.4853.

Not as strong as the league table correlations but enough of a correlation to reinforce that the PWP Indices have relevance to points in the league table without including (points) in the analysis that creates the Indices of team performance.
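
For readers who want to run the same kind of correlation check against their own data set, here is a minimal sketch. The fouls and goals figures below are invented for illustration only; they are not the MLS numbers cited above.

```python
# Pearson correlation sketch; the sample numbers are hypothetical, not MLS data.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

fouls_def_third = [4.2, 3.2, 2.9, 2.3, 1.8, 1.5]  # hypothetical fouls per game
opp_goals       = [1.9, 1.6, 1.7, 1.2, 1.0, 0.9]  # hypothetical goals conceded per game

r = pearson(fouls_def_third, opp_goals)
print(f"r = {r:.4f}, R^2 = {r * r:.4f}")
```

Squaring r gives the R-squared figure quoted throughout these articles.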

In closing…

Fouling your opponent in your own back yard hurts – it not only hurts team performance it also hurts in the league standings…

Those teams that do this regularly don’t appear to do well (based upon both views of data – quantitative and qualitative) in Major League Soccer…

Best, Chris

Passing – An oddity in how it’s measured in Soccer (Part II)

If you read my initial article, “Passing – An oddity in how it’s measured in Soccer Part I“, I hope you find this one of value as well, as the onion gets peeled back a bit further to focus on Crosses.

To begin please consider the different definitions of passing identified in Part I and then take some time to review these two additional articles (Football Basics – Crossing) & (Football Basics – The Passing Checklist) published by Leo Chan – Football Performance Analysis, adding context to two books written by Charles Hughes in 1987 (Soccer Tactics and Skills) and 1990 (The Winning Formula). My thanks to Sean McAuley, Assistant Head Coach for the Portland Timbers, for providing these insightful references.

In asking John Galas, Head Coach of newly formed Lane United FC in Eugene, Oregon here’s what he had to offer:

“If a cross isn’t a pass, should we omit any long ball passing stats? To suggest a cross is not a pass [is] ridiculous, it is without a doubt a pass, successful or not – just ask Manchester United, they ‘passed’ the ball a record 81 times from the flank against Fulham a few weeks back.”

In asking Jamie Clark, Head Coach for Soccer at the University of Washington these were his thoughts…

“It’s criminal that crosses aren’t considered passing statistically speaking. Any coach or player knows the art and skill of passing and realizes the importance of crossing as it’s often the final pass leading to a goal. If anything, successful passes should count and unsuccessful shouldn’t as it’s more like a shot in many ways that has, I’m guessing, little chance of being successful statistically speaking yet necessary and incredibly important.”

Once you’ve taken the time to read through those articles, and mulled over the additional thoughts from John Galas and Jamie Clark, consider this table.

Stat                         | Golazo/MLS Stats     | Squawka     | Whoscored   | MLS Chalkboard   | My approach     | Different (Yes/No)?
Total Passes                 | 369                  | 356         | 412         | 309+125 = 434    | 309+125+9 = 443 | Yes
Total Successful Passes      | 277                  | 270         | 305         | 309              | 309+9 = 318     | Yes
Passing Accuracy             | 75%                  | 76%         | 74%         | NOT OFFERED      | 71.78%          | Yes
Possession Percentage        | 55.30%               | 53%         | 55%         | NOT OFFERED      | 55.93%          | Yes
Final Third Passing Accuracy | 89/141 = 63.12%      | NOT OFFERED | NOT OFFERED | FILTER TO CREATE | 92/140 = 65.71% | Yes
Total Crosses                | 35 vs 26 (MLS Stats) | NOT OFFERED | 35          | 35               | 35              | No
Successful Crosses           | 35*.257 = 9          | NOT OFFERED | 9           | 9                | 9               | No
* NOTE: MLS Chalkboard includes unsuccessful crosses as part of their unsuccessful passes total but does not include successful crosses as part of their total successful passes; successful crosses must be added back manually.
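
To spell out the arithmetic behind the “My approach” column above, using the MLS Chalkboard figures from the table (309 successful passes, 125 unsuccessful passes including unsuccessful crosses, and 9 successful crosses added back manually):

```python
# Recomputing the "My approach" column: crosses counted as passes.
successful_passes = 309     # MLS Chalkboard successful passes (crosses excluded)
unsuccessful_passes = 125   # already includes unsuccessful crosses per the note
successful_crosses = 9      # must be added back manually

total_passes = successful_passes + unsuccessful_passes + successful_crosses
total_successful = successful_passes + successful_crosses
accuracy = total_successful / total_passes

print(total_passes, total_successful, f"{accuracy:.2%}")  # 443 318 71.78%
```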

For many, these differences might not mean very much, but if looking for correlations and considering R-squared values that go to four significant digits, these variations in the data might present an issue.

I don’t track individual players but Harrison and Matthias do, as does Colin Trainor, who offered up a great comment in the Part I series that may help others figure out where good individual data sources might come from.

What’s next?

My intent here is not to simply offer up a problem without a solution; I have a few thoughts on a way forward but before getting there I wanted to offer up what OPTA responded with first:

I [an OPTA representative] have had a word with our editorial team who handle the different variables that we collect. There is no overlay from crosses to passes as you mention; they are completely different data variables. This is a decision made as it fits in with the football industry more. Crosses are discussed and analysed as separate to passes in this sense. We have 16 different types of passes on our F24 feed in addition to the cross variable.

So OPTA doesn’t consider a cross a pass – they consider it a ‘variable’?!?

Well, I agree that it is a variable and can (and should) be tracked separately for other reasons; but for me it’s subservient to a pass first and therefore should be counted in the overall passing category that directly influences a team’s percentage of possession. Put another way: it’s a cross – but first and foremost it’s a pass.

Perhaps OPTA (now part of the Perform Group) and others in the soccer statistics industry will reconsider how they track passes?

I am also hopeful that OPTA might create a ‘hot button’ on the MLS Chalkboard that allows analysts the ability to filter the final third consistently, from game to game to game, as an improvement over the already useful ‘filter cross-hairs’…

In closing…

My intent is not to call out any statistical organizations but to offer up for others, who have a passion for soccer analyses, that there are differences in how some statistics can be presented, interpreted and offered up for consideration. In my own Possession with Purpose analysis every ball movement from one player to another is considered in calculating team passing data.

Perhaps this comparison is misplaced, but would we expect the NFL to call a ‘screen pass’ a non-pass and a variation of a pass that isn’t counted in the overall totals for a Team and Quarterback’s completion rating?

Here’s a great example of how Possession Percentage is being interpreted that might indicate a trend.

Ben has done some great research and sourced MLS Stats (as appropriate) in providing his data – he’s also offered up that calculating possession is an issue in the analytical field of soccer as well.

In peeling back the data provided by MLS Stats he is absolutely correct that the trend is what it is… When adding crosses and other passing activities excluded by MLS Stats the picture is quite different and lends credence to what Bradley offers.

For example, when adding crosses and other passing activities not included by MLS Stats, the possession percentages for teams change: the R-squared against points in the league table comes out at 0.353, with only 7 of 8 possession-based teams making the playoffs. New York (who took the most points), New England, and Colorado all had possession percentages below 50% last year, and DC United – the only one of those teams that didn’t make the playoffs – finished with the worst record (16 points).

For me, that was superb research – a great conclusion that was statistically supported. Yet, when viewed with a different lens on what events are counted as passes, the results are completely different.

All the best,


You can follow me on twitter @chrisgluckpwp