Do governments and businesses receive value for money when they attempt to spur innovation through programs and organizations like research networks, science parks, and industry associations? The impact of these “innovation intermediaries” is often very difficult to determine, says Margaret Dalziel, associate professor at the University of Ottawa’s Telfer School of Management. In a panel discussion at the 3rd Canadian Science Policy Conference in Ottawa on November 17, Dalziel noted that while there has been a renewed focus on intermediaries in recent years, amid heightened concerns for transparency and accountability, efforts to gauge their impacts to date have had mixed results.
“The impact of these intermediaries can be positive, negative, or not significant,” says Dalziel, an expert on innovation and entrepreneurship who is researching how best to measure the effects of intermediaries. “For example, one study showed the U.S. Small Business Innovation Research program had a positive impact on revenues, employment, and venture capital financing, but another study found no impact on employment or investment in R&D.”
In the 1990s, studies of Sematech, a large U.S. R&D consortium, revealed positive impacts on generic technology and industry infrastructure, but negative impacts on R&D spending. Similarly, researchers have produced differing assessments of the impact of science parks and industry associations.
“It’s difficult to measure impact for several reasons,” Dalziel explains. Every firm is different; there is a time lag between a firm’s engagement with an intermediary and its outcomes; and it is hard to distinguish between what are referred to as ‘selection effects’ and ‘treatment effects.’
“In other words, does engagement with intermediary X lead to high firm performance or does intermediary X engage with high performing firms?” says Dalziel. “In addition, an innovation intermediary’s activities affect firm resources and capabilities, but data on these characteristics is hard to come by. If data on firm performance is used to measure an intermediary’s impact, one has to control for other factors that affect firm performance.”
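The selection-versus-treatment question Dalziel raises can be illustrated with a toy simulation (all numbers and the selection mechanism here are hypothetical, purely for illustration): if an intermediary tends to engage firms that are already strong, a naive comparison of engaged versus non-engaged firms will show a large performance gap even when the program itself contributes nothing.

```python
import math
import random

random.seed(42)

# Hypothetical setup: each firm has a latent "quality" that drives both
# its chance of engaging with the intermediary and its later performance.
N = 10_000
TRUE_TREATMENT_EFFECT = 0.0  # the program itself adds nothing in this toy model

firms = []
for _ in range(N):
    quality = random.gauss(0, 1)
    # Selection effect: higher-quality firms are more likely to engage
    # (logistic selection on quality).
    engaged = random.random() < 1 / (1 + math.exp(-2 * quality))
    performance = quality + TRUE_TREATMENT_EFFECT * engaged + random.gauss(0, 0.5)
    firms.append((quality, engaged, performance))

def mean(xs):
    return sum(xs) / len(xs)

treated = [p for q, e, p in firms if e]
control = [p for q, e, p in firms if not e]

# Naive comparison conflates the selection effect with the treatment effect.
naive_gap = mean(treated) - mean(control)

# Crude control for quality: compare only firms of similar quality,
# a stand-in for matching on observable firm characteristics.
band = [(e, p) for q, e, p in firms if abs(q) < 0.1]
matched_treated = [p for e, p in band if e]
matched_control = [p for e, p in band if not e]
matched_gap = mean(matched_treated) - mean(matched_control)

print(f"naive gap:   {naive_gap:.2f}")    # large, despite zero true effect
print(f"matched gap: {matched_gap:.2f}")  # close to the true effect of zero
```

The naive gap is driven entirely by which firms the intermediary selected, which is why Dalziel stresses controlling for other factors that affect firm performance before crediting the intermediary.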
Dalziel says measuring what’s relevant, not what’s convenient, will be important in improving assessments. Researchers should use firm-level data, consider multiple dimensions of impact, and leverage the ability of executives to judge whether or not the intermediary contributed to a specific outcome.
She also cautions that measuring performance shouldn’t detract from achieving it, underlining the need to be efficient in measuring impact.
“Researchers have made promising strides in improving impact assessment, and reliable, relevant and actionable metrics are on the horizon,” she says.