
In the famous 2003 book “Moneyball” (and its subsequent movie adaptation starring Brad Pitt), author and journalist Michael Lewis unpacked the improbable success of the Oakland Athletics baseball team, built on using statistics and analytics to outwit better-funded competitors: recruiting players rated poorly by the criteria of experienced scouts, but whose stats indicated they could perform better than the eye and traditional views suggested. Moneyball triggered a wave of interest in analytics and data mining that expanded well beyond talent spotting in professional sports.

This book was one of many triggers that pointed us toward using evidence-based analytics to inform and improve decision-making, over and above the tacit knowledge accumulated over years of practice and framed in a particular worldview. The transformational power of analytics leads smart decision-makers to challenge their own assumptions and opens up the range of possibilities they consider when making a strategic decision. This insight proved so powerful that many organizations, from professional sports franchises to investment banks, media conglomerates, supply-chain specialists, food-processing giants, and governments, took up the idea.

In the 2016 follow-up book, “The Undoing Project,” Lewis goes one level further: in the introductory chapter, he details the learning journey of Daryl Morey, the successful General Manager of the NBA’s Houston Rockets. Morey had built a game-changing analytics database, and he was attempting to change the mindset of his scouting team in order to enhance the Rockets’ probability of drafting high-potential basketball players. For a few years, Morey’s analytics gave the Rockets a small but consequential advantage, until other teams started to copy their approach. Even then, some odd cases of talent-recruitment success and failure led Morey to challenge the wisdom of the tool he had worked so hard to develop. He discovered that the very assumptions he had made about what made a professional athlete were as biased as the conventional wisdom of the scouts, only in a more subtle way. He also found that the quest for ever more detailed stats about athletes’ performances, from measurable outcomes (number of points, or blocks) to measures of performance (points per minute on the floor) to predictors of these (comparative explosiveness of the player in their first two steps), was leading to a never-ending analytical maze of rabbit warrens.

So what’s the wisdom of this story for complex project management? The first lesson, which should not be a surprise, is that we should be careful about which KPIs we set for projects. The second is more subtle and more challenging. On the one hand, the rise of analytics, artificial intelligence, and associated tools provides an opportunity to mitigate our own unconscious decision-making biases and, overall, to reduce the number of sub-par decisions we make. On the other hand, we should be healthily critical about the accuracy and reliability of the algorithms: they were written by human beings and reflect our own biases, only magnified by the computational power of the machines and models we use.

Some might say that this is only a temporary phase: machine learning will enable us to overcome these imperfections as the algorithms perfect themselves. To me this reveals a hopeful and somewhat naïve perspective on knowledge and technology development, one that overstates potential advances and ignores the fact that the very code that drives machine learning, by conception, reflects our human biases. There is now well-established evidence that the underlying code for some ubiquitous machine-driven devices is biased (e.g. automated soap dispensers work less well if your skin has darker pigmentation). And yes, we will improve how analytics and algorithms perform over time, but only through the reflective and critical input of human actors.

I would like to temper the enthusiasm of the AI bandwagon: yes, AI, machine learning, and analytics have a contribution to make. And this should be instantiated in decision-making scenarios where human decision-makers are presented with evidence-based scenarios about the consequences of their decisions. Beyond mundane data-based situations (e.g. how much should I pay for a bundle of groceries at the check-out?), there is at present no evidence to support the claim that machine-based decisions are better. Mark my words: at the current time, the idea of handing over the leadership of complex initiatives to algorithm-driven machine-learning engines sounds misguided and dangerous. Do you really want your multi-billion-dollar project contract dispute resolution process to be settled by a machine? At the present time, delivering benefits and outcomes for stakeholders in complex projects requires sophisticated decision-making and stakeholder-engagement capabilities that exceed piloting an autonomous vehicle on the highway by orders of magnitude.

So what’s the healthy approach? I welcome the increase in computational power and analytics that have become available to decision-makers. But only if this is a valued input in the decision-making process, as opposed to handing decisions over to the system.

Stephane Tywoniak, PhD, is the Academic Director of the Executive Master of Business in Complex Project Leadership program at the Telfer School of Management.

Advances in Complex Project Leadership: Setting Conditions for Project Success in a Complex Environment

Industry experts share global best practices in the areas of Business Transformation, Innovation, IT, R&D, Operations, and Acquisition Strategies.

Save the date: May 8th, 2019, MBCPL Seminar: Advances in Complex Project Leadership

© 2020 Telfer School of Management, University of Ottawa
