I saw that on Twitter when he posted it. Interesting stuff, but I'm not sure it has broad implications.
First, obviously, it's just 2 data points from 1 team.
Second, he's relying on an xGD-to-GD comparison, which lumps together offensive and defensive over- or under-performance. I checked the Vieira start, and our underperformance then was entirely on the defensive side: in the first 15 games (the bad stretch as I mark it) in 2016, NYC scored 22 goals against 21.1 xGoals, but allowed 28 goals against only 20.7 xGoals against. So that has little to do with my point about the difference between creating more good chances and finishing rate. I also don't think our defense in that stretch was "unlucky." I think the xGoal metric probably underweights the ease of scoring one-on-one with the keeper after a giveaway playing out of the back, which we allowed repeatedly during that stretch. Eventually we cleaned that up, and our "luck" improved along with it. Overall, I'm still of the mind that the variance between actual goals and xGoals is a combination of luck and team-specific factors that xGoals fails to capture.
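To make the attack/defense split concrete, here's a minimal sketch of the decomposition I'm describing. The numbers are the 2016 NYCFC figures quoted above; the helper function is my own illustration, not from any analytics library:

```python
# Split (GD - xGD) into an attacking and a defensive component, so you can
# see which side of the ball is driving the over/underperformance.

def decompose_xg_performance(goals_for, xg_for, goals_against, xg_against):
    """Return GD, xGD, and the attack/defense deltas that sum to GD - xGD."""
    attack_delta = goals_for - xg_for            # > 0: finishing above xG
    defense_delta = xg_against - goals_against   # < 0: conceding more than xGA
    gd = goals_for - goals_against
    xgd = xg_for - xg_against
    return {
        "GD": gd,
        "xGD": xgd,
        "GD_minus_xGD": gd - xgd,
        "attack_delta": attack_delta,
        "defense_delta": defense_delta,
    }

# First 15 games of NYCFC's 2016 season, per the figures above:
r = decompose_xg_performance(goals_for=22, xg_for=21.1,
                             goals_against=28, xg_against=20.7)
print({k: round(v, 1) for k, v in r.items()})
```

The point of the split: the headline gap of GD minus xGD (-6.4 here) is almost all defense_delta (-7.3), with the attack actually a hair above expectation (+0.9). A single combined xGD-to-GD chart hides that.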
Finally, I find it interesting that his NYCFC chart runs counter to one of the favorite anecdotes used by advanced-stat proponents in soccer. They commonly note that coaches or managers are often fired after a prolonged spell of xG underperformance, and the new coach often enough benefits from the inevitable swing of the chart in the other direction, so the change looks like it worked even though it was just regression to the mean. And they can easily find individual data points to support this theme. Again, the NYC graph is just 2 data points in the other direction, but I'd be interested in a study that goes beyond anecdotal evidence and covers numerous coaching changes across leagues and years to see how this tends to play out.