Saturday, August 11, 2012

The Reality of Strategy: The Case of London Business School

When Laura d’Andrea Tyson was the Dean of London Business School – some years ago – she put together a committee to examine and reformulate the School’s strategy. Several professors sat on that committee. When I once asked her, over a drink at her home, why none of them were Strategy professors, she looked at me, baffled, for about five seconds. Eventually, she stammered, “Yes, perhaps we could consider that in the future….”

It was clear to me, from her stunned silence (and she wasn’t easily lost for words), that she had never even considered the thought before.

I, in contrast, thought it wasn’t such an alien idea: putting some strategy professors on the School’s strategy-making committee. We had – and still have – people in our Strategy department (e.g. Costas Markides, Sumantra Ghoshal) who not only had dozens of top academic publications to their names but also had an eager ear amongst strategy practitioners, through their Harvard Business Review articles and hundreds of thousands of business books sold (not to mention their fairly astronomical consulting fees).

Today, our current Dean – Sir Andrew Likierman – is working with a group of people on a huge strategic growth decision for the School, namely the acquisition of a nearby building from the local government that would increase our capacity overnight by about 70 percent. Once more, strategy professors have no closer role in the process than others; their voice is as lost in the quagmire as anyone else’s.

If Sir Andrew had been an executive MBA student in my elective (“Strategies for Growth”) writing an essay about the situation, I would ask him for a justification of the need for growth given the characteristics of the market; I’d ask him about the various options for growth (geographic expansion, e.g. a campus abroad; related diversification, e.g. an online offering, etc.), and how an analysis of the organisation’s resources and capabilities links to these various options, and so on. But a systematic analysis based on what we teach in our own classrooms and publish in our books and journals has, it seems, not even been considered.

And I genuinely wonder why that is. Because it is not only strategy professors, and it is not only deans. Whenever the topic of the School’s brand name comes up, no-one seems inclined to pay more attention to our Marketing professors (some of whom are true heavyweights in the field of branding) than to the layman’s remarks of Economics or Strategy folk. When the School’s culture and values are being assessed, Organizational Behaviour professors are conspicuously absent from the organising committee (ironically, it was run by a Marketing guy); likewise for Economics and OB professors when we discuss incentives and remuneration. So why is that?

Is it that deep down we don’t actually believe what we teach? Or is it that we just don’t believe what any of our colleagues in other departments teach…? And that it could somehow be relevant to practice – including our own? Why do we charge companies and students small – and not so small – fortunes for our guidance on how to shape strategy, brands, and remuneration systems, only to see that when our own organisation deals with them it all goes out the window?

I guess I simply don’t understand the psychology behind this. Wait… perhaps I should go ask my Organizational Behaviour professors down the corridor!



Since writing the piece above – perhaps not surprisingly, although it took me a bit by surprise (I didn’t think anyone actually read this stuff) – Sir Andrew contacted me. One could say that he took the oral exam following his essay on the School’s growth plans and passed it (with distinction!).

In all seriousness, in hindsight, I think I was unfair to him – perhaps even presumptuous. I wrote “a systematic analysis based on what we teach in our own classrooms and publish in our books and journals has, it seems, not even been considered” and, now, I think I should not have written that. That I have not been much involved in the process, and therefore have not seen the analysis, of course does not mean it was never conducted. And it is a bit unfair, from the sidelines, to throw in a comment like that when someone has put in so much careful work. I apologise!

In fact, although Sir Andrew never lost his British cool, charm and good sense of humour, I realise it must actually have been “ever so slightly annoying” for him to read that comment, especially from a colleague, and he doesn’t deserve that. So, regarding the specifics of this example: forget it! Ban it from your minds, memories, bookmarks and favourites (how would this Vermeulen guy know?! He wasn’t even there!).

That you should pay more attention to Marketing professors when considering your school’s brand name, more attention to your OB professors when considering your incentive systems and values, more attention to Finance professors when managing your endowment and, God forbid, sometimes even to some Strategy professors when considering your school’s strategy, I feel, does still stand – so don’t throw out the baby with the bathwater just yet. But, yes, do get rid of that stinky bathwater.

Monday, July 9, 2012

Strategy is a Story

Stevie Spring, who recently stepped down after a successful stint as CEO of Future plc, the specialty magazine publisher, once told me, “I am not really the company’s CEO; what I really am is its Chief Story Teller.”

What she meant is that she believed that telling a story was her most important task as a CEO. Actually, she insisted, her job was to tell the same story over and over again. And when she said ‘a story’, she meant that her job was to tell her representation of the company’s strategy: the direction she wanted to take the business and how that was going to make it prosper and survive. She felt that a good CEO should tell that kind of story repeatedly, to all employees, shareholders, fund managers and analysts. For, indeed, a good strategy does tell a story.

All successful CEOs whom I have seen were great storytellers. Not necessarily because of their oratorical skills, but because the characteristics of the strategy they had put together lent themselves to being told like a story — and a good one too! The most important thing for a CEO to do is to provide a coherent, compelling strategic direction for the company, one that is understood by everyone who has to contribute to its achievement. For that, a story must be told.

When I say this, I am not implying that CEOs need to engage in fiction, nor do they need to be overly dramatic. In my view, a good business strategy story has three characteristics.

First, the story must provide clear choices.
Stevie Spring’s choices were as clear as her forthright language: “We provide specialty magazines, for young males, in Britain.” Hence, it was clear what was out: there were to be no magazines on, say, ‘music’ (that is too broad), no magazines in German (although that could be a perfectly profitable business for someone else) and no magazines on pottery or vegetable gardens (unless those have recently seen a surge in popularity among young males in the UK without my knowing it). A good strategy story has to contain such a set of genuine choices.

Moreover, it has to be clear how the choices made by the company’s leaders hang together. For example, Frank Martin, who as CEO orchestrated the revival of the British model-train maker Hornby by turning it from a toy company into a hobby company, put his strategy story in just 15 words: “We make perfect scale models for adult collectors, which appeal to some sense of nostalgia.” He decided to focus on making perfect scale models because that is what collectors look for. Moreover, people would usually collect the Hornby brand specifically because it reminded them of their childhood, and with it a nostalgic, bygone era. Frank Martin’s choices were not just a bunch of disconnected strategic decisions; they hung together and, combined, made for a logical story.

Second, the story must tie to the company’s resources.
Importantly, the set of choices has to be clearly linked to the company’s unique resources, those that can give them a competitive advantage in an attractive segment of the market. Although Hornby had been hovering on the brink of bankruptcy for a decade, it still had some valuable resources. First of all, it possessed a valuable brand that was very well-known and appreciated by people who had owned a Hornby train as children.

Additionally, the company had a great design capability in its hometown of Margate. However, these resources weren’t worth much when competing with the cheaper Chinese toy makers. The children who wanted a toy train for their birthday didn’t know (and couldn’t care less) about the Hornby brand. The precision modelling skills of the engineers in Margate weren’t of much value in the toy segment, where things mostly had to be robust and durable. However, these two resources — an iconic brand and a design capability — were of considerable value when making ‘perfect scale models for adult collectors’. It was a perfect match of existing resources to strategy.

I observed a similar thing at the Sadler’s Wells theatre. Ten years ago, before the current CEO Alistair Spalding took over, the theatre put on all sorts of grand shows across the various performing arts. Yet, the company was in dire straits, losing money evening after evening, and by the bucket. Then Spalding took over and marked his leadership with a clear story. He started telling everyone that the theatre was destined ‘to be the centre of innovation in dance’.

He did this because the company was blessed with two valuable resources: (1) an historic reputation for dance (although it had diversified outside dance in the preceding years) and (2) a theatre once designed specifically with dance in mind. Spalding understood that, with these unique resources, he needed to focus the theatre on dance again. Beyond that, he made it the spider in the web: a place where various innovative people and dance forms came together to create new art, a place where stars were formed.

Third, the story must explain a competitive advantage.
The story must not only provide choices that are linked to resources, it must also explain how these choices and resources are going to give the company a competitive advantage in an attractive market, one that others can’t easily emulate. For example, Hornby’s resources enabled it to make perfect scale models for adult collectors better than anyone else, but those adult collectors also happened to form a very affluent and growing segment, one in which margins were much better than in the super-competitive toy market. It isn’t much good to have a competitive advantage in a dying market; you want to be able to do something better than anyone else in a market that will make you grow and prosper.

Thus, it has to be clear from your strategy story why the market is attractive and how your resources are going to enable you to capture the value in that market better than anyone else. The story of Tony Cohen, CEO of Fremantle Media, for example, was that his company was going to make television productions that were replicable in other countries, with spillovers into other media. Because of its worldwide presence, Fremantle Media was better than its national competitors at rolling out productions such as The X Factor, Pop Idol, game shows and sitcoms. While their local competitors could also develop attractive and innovative shows, Fremantle’s multinational presence enabled it to reap more value from them. Therefore, that’s what they focused upon: shows that they could replicate across the globe. It was their competitive advantage, and they built their story around it.

Of course, a good story alone is not enough. A leader still needs good products, people, marketing, finance and so on. But, without a good story, a leader will find it impossible to combine people and resources into a forceful strategic thrust. A good story is a necessary — although, alone, not sufficient — condition for success.

My message for leaders: if you get your story right, it can be a very powerful management tool indeed. It works to convince analysts, shareholders and the public that where you are taking the company is worth everyone’s time, energy and investment.

Perhaps even more importantly, it can provide inspiration to the people who will have to work with and implement the strategy. If employees understand the logic behind a company’s strategic choices and see how it might give the company a sustainable advantage over its competitors, they will soon believe in it. They will soon embrace it. And they will soon execute it. Collective belief is a strong precursor of success. Thus, a good story can spur a company forward and eventually make the story come true.

Tuesday, June 12, 2012

The Translation Fallacy

If you have ever been unlucky enough to attend a large gathering of strategy academics – as I have, many times – it may have struck you that at some point during such a feast (euphemistically called a “conference”), the subject matter would turn to talk of “relevance”. It is likely that the speakers were a mixture of the senior and grey – in multiple ways – interspersed with aspiring Young Turks. A peculiar meeting of minds, where the feeling might have dawned on you that the senior professors were displaying a growing fear of bowing out of the profession (or life in general) without ever having had any impact on the world they spent a lifetime studying, while the young assistant professors showed an endearing naivety in believing they were not going to grow up like their academic parents.

And the conclusion of this uncomfortable alliance – under the glazed eyes of some mid-career associate professors, who could no longer and not yet care about relevance – will likely have been that “we need to be better at translating our research for managers”; that is, if we’d just write up our research findings in more accessible language, without elaborating on the research methodology and theoretical terminology, managers would immediately spot the relevance of our research and eagerly suck up its wisdom.

And I think that’s bollocks.

I don’t think it is bollocks that we – academics – should try to write something that practicing managers are eager to read and learn from; I think it is bollocks that all it takes is a bit of translation into layman’s terms and the job is done.

Don’t kid yourself – I am inclined to say – it ain’t that easy. In fact, I think there are three reasons why I never see such a translation exercise work.

1. Ignorance

First, the idea underestimates the intricacies of the underlying structure of a good managerial article, and the subtleties of writing convincingly for practicing managers. If you’re an academic, you might remember that in your first year as a PhD student you had the feeling it wasn’t too difficult to write an academic article like the ones you had been reading for your first course, only to figure out, after a year or two of training, that you had been a bit naïve: you had been (blissfully) unaware of the subtleties of writing for an academic journal – how to structure the arguments, which prior studies to cite and where, which terminology to use and what to avoid, and so on. Well, good managerial articles are no different; if you haven’t yet developed the skill to write one, you likely don’t quite realise what it takes.

2. False assumptions

It also seems that academics wanting to write their first managerial piece immediately assume they have to be explicitly prescriptive and tell managers what to do. And the draft article – invariably built around “the five lessons coming out of my research” – will indeed be fiercely normative. Yet those messages are often either impractically precise ("take up a central position in a network with structural holes") or too simple to be of any real use ("choose the right location"). You need to capture a busy executive’s attention and interest, giving them the feeling that they have gained a new insight into their own world by reading your work. If that is prescriptive: fine. But often precise advice is precisely wrong.

3. Lack of content

And, of course, more often than not, there is not much worth translating… Because people have done their research with solely an academic audience in mind – the desire to also tell the real world about it only came later – it has produced no insight relevant to practice. I believe that publishing your research in a good academic journal is a necessary condition for it to be relevant; crappy research – no matter how intriguing its conclusions – can never be considered useful. But rigour alone, unfortunately, is not a sufficient condition for relevance and importance in terms of its implications for the world of business.

Monday, June 4, 2012

“Can’t Believe It” 2

My earlier post – “Can’t Believe It” – triggered some polarised comments (and further denials), including the question of the extent to which this behaviour can be observed among academics studying strategy. And, regarding the latter, I think: yes.
The denial of research findings obviously relates to confirmation bias (although it is not the same thing). Confirmation bias is a tricky thing: we – largely without realising it – are much more prone to notice things that confirm our prior beliefs. Things that run counter to them often escape our attention.

Things get particularly nasty – I agree – when we do notice the facts that defy our beliefs but we still don’t like them. Even if they are generated by solid research, we’d still like to find a reason to deny them, and therefore see people start to question the research itself vehemently (if not aggressively and emotionally).

It becomes yet more worrying to me – on a personal level – if even academic researchers themselves display such tendencies – and they do. What do you think a researcher in corporate social responsibility will be most critical of: a study showing it increases firm performance, or a study showing that it does not? Whose methodology do you think a researcher on gender biases will be more inclined to challenge: a research project showing no pay differences or a study showing that women are underpaid relative to men?

It’s only human and – slightly unfortunately – researchers are also human. And researchers are also reviewers and gatekeepers of the papers that other academics submit for possible publication in academic journals. They bring their biases with them when determining what gets published and what doesn’t.

And there is some evidence of that: studies showing weak relationships between social performance and financial performance are less likely to make it into a management journal than into a finance journal (where more researchers are inclined to believe that social performance is not what a firm should care about), and perhaps vice versa.

No research is perfect, but the bar is often much higher for research generating uncomfortable findings. I have little doubt that reviewers and readers are much more forgiving when it comes to the methods of research that generates nicely belief-confirming results. Results we don’t like are much less likely to find their way into an academic journal. Which means that, in the end, research may end up being biased and misleading.

Thursday, May 24, 2012

“Can’t Believe It” (we deny research findings that defy our beliefs)

So, I have been running a little experiment on Twitter. Oh well, it doesn’t really deserve the term “experiment” – at least in academic vocabulary – because there certainly are no treatment effects or control groups. It does deserve the term “little” though, because there are only four observations.

My experiment was to post a few recent findings from academic research that some might find mildly controversial or – as it turns out – offensive. These four hair-raising findings were: 1) selling junk food in schools does not lead to increased obesity; 2) family-friendly workplace practices do not improve firm performance (although they do not decrease it either); 3) girls take longer to heal from concussions; 4) firms headed up by CEOs with broader faces show higher profitability.

Only mildly controversial, I’d say, and only to some. I was just curious to see what reactions they would trigger, because I have noticed in the past that people seem inclined to dismiss academic evidence if they don’t like the results. If the results are in line with their own beliefs and preconceptions, a study’s methods and validity are much less likely to be called stupid.

That selling junk food in schools does not lead to increased obesity is the finding of a very careful study by professors Jennifer Van Hook and Claire Altman. It provides strong evidence that selling junk food in schools does not lead to more fat kids. One can then speculate about why this is – and their explanation that children’s food patterns and dietary preferences get established well before adolescence may be a plausible one – but you can’t deny their facts. Yet it did lead to “clever” reactions such as “says more about academic research than junk food, I fear...”, by people who clearly hadn’t actually read the study.

That family-friendly workplace practices do not improve firm performance is another finding that is not welcomed by all. This large and competent study, by professors Nick Bloom, Toby Kretschmer and John van Reenen, was actually read by some, albeit clearly without a proper understanding of its methodology (which, it being an academic paper, is indeed hard to fully appreciate without proper research methodology training). It led to reactions that the study was “in fact, wrong”, made “no sense”, or even that it really showed the opposite; these silly professors just didn’t realise it.

That girls take longer to heal from concussions is the empirical fact established by Professor Tracey Covassin and colleagues. Of course there is no denying that girls and boys are physiologically different (one cursory look at my sister in the bathtub taught me that at an early age), but the aforementioned finding still led to swift denials such as “speculation!”

That firms headed up by CEOs with broader faces achieve higher profitability – a careful (and, in my view, quite intriguing) empirical finding by my colleague Margaret Ormiston and colleagues – triggered reactions such as “sometimes a study tells you more about the interests of the researcher than about the object of the study” and “total nonsense”.

So I have to conclude from my little (academically invalid) mini-experiment that some people are inclined to dismiss research results they do not like – even without reading the research, or without the skills to properly understand it. In contrast, other, nicer findings that I had posted in the past, which people did want to believe, never led to outcries about bad methodology and idiotic academics and, in fact, were often eagerly retweeted.

We all look for confirmation of our pre-existing beliefs and don’t much like it when these comfortable convictions are challenged. I have little doubt that this also heavily influences the type of research that companies conduct, condone, publish and pay attention to. Even when the findings are nicer than our preconceptions (e.g. the availability of junk food does not make kids consume more of it), we prefer to stick to our old beliefs. And I guess that’s simply human; people’s convictions don’t change easily.

Thursday, May 10, 2012

Let’s face it: in most industries, firms pretty much do the same thing

In the field of strategy, we always make a big thing out of differentiation: we tell firms that they have to do something different in the marketplace and offer customers a unique value proposition. Whole bodies of ideas – product differentiation, value innovation, Blue Oceans – are devoted to it. But we also can’t deny that in many industries – if not most – firms more or less do the same thing.

Whether you take supermarkets, investment banks, airlines, or auditors, what you get as a customer is highly similar across firms.

1. Ability to execute: What may be the case is that, despite doing pretty much the same thing and following the same strategy, there can be substantial differences between firms in terms of their profitability. The reason can lie in execution: some firms have built capabilities that enable them to implement, and hence profit from, the strategy better than others. For example, Sainsbury’s supermarkets really aren’t all that different from Tesco’s, offering the same products at pretty much the same prices in pretty much the same shape and fashion, in near-identical shops with similarly tempting routes and a till at the end. But for many years Tesco had a superior ability to organise the logistics and processes behind its supermarkets, raking in substantially higher profits in the process.

2. Shake-out: As a consequence of such capability differences – although it can be a surprisingly slow process – and due to their homogeneous goods, we may see firms start to compete on price, margins decline to zero, and the least efficient firms get pushed out of the market. And one can hear a sigh of relief amongst economists: “our theory works” (not that we particularly care about the world of practice, let alone feel inclined to adapt our theory to it, but it is more comforting this way).

3. A surprisingly common anomaly? But it also can’t be denied that there are industries in which firms offer pretty much the same thing, have highly similar capabilities, are not any different in their execution, and still maintain ridiculously high margins for a sustained period of time. And why is that? For example, as a customer, when you hire one of the Big Four accounting firms (PwC, Ernst & Young, KPMG, Deloitte), you really get the same stuff. They are organised pretty much the same way, they have the same type of people and cultures, and have highly similar processes in place. Yet, they also (still) make buckets of money, repeatedly turning and churning their partners into millionaires.

“But such markets shouldn’t exist!” we might cry out in despair. But they do. Even the Big Four themselves will admit – albeit only in covert private conversations, carefully shielding their mouths with their hands – that they are really not that different. And quite a few industries are like that. Is it a conspiracy, illegal collusion, or a business X-file?

None of the above, I am sure – or perhaps a bit of all of them… For one, industry norms seem to play a big role: unwritten (sometimes even unconscious) collective moral codes, sometimes spanning the globe, about how to behave and what to do if you want to be in this profession. Which includes the minimum price to charge for a surprisingly undifferentiated service.

Monday, April 2, 2012

A good fight clears the mind: On the value of staging a debate

I always enjoy witnessing a good debate. And I mean the type of debate where one person is given a thesis to defend while the other speaks in favour of the antithesis. Sometimes – when smart people really get into it – seeing two debaters line up the arguments and create the strongest possible defence can really clarify the pros and cons in my mind and hence make me understand the issue better.

For example – albeit in written format – my good friend and colleague at the London Business School, Costas Markides, was recently asked by Businessweek to debate the thesis that “happy workers will produce more and do their jobs better”. Harvard’s Teresa Amabile and Steven Kramer had the (relatively easy) task of defending the “pro”. I say relatively easy because the thesis seems intuitively appealing, it is what we’d all like to believe, and they have actually done ample research on the topic.

My poor London Business School colleague was given the hapless task of defending the “con”: “no, happy workers don’t do any better”. Hapless indeed.

In fact, in spite of his receiving some hate mail in the process, I think he did a rather good job. I am giving him the assessment “good” because he did indeed make me think. He argued that having happy, smiley employees all around might not necessarily be a good sign: it might signal that something is wrong in your organisation and that you’re perhaps not making the tough but necessary choices.

As I said, it made me think, and that can’t be bad. Might we not be dealing with a reversal of cause and effect here? Meaning: well-managed companies end up with happy employees, but that does not mean that making your employees happy, as a goal in and of itself, will get you a better organisation. At least, it is worth thinking about.

Although a good debate might seem a natural thing to have in an academic institution, it is actually not easy to organise one in business academia. Most people are simply reluctant to do it – as I found out organising our yearly Ghoshal Conference at the London Business School – and perhaps they are right, because even fewer people are any good at it.

I guess that is because, to a professor, it feels unnatural to adopt and defend just one side of an argument; we are trained to be nuanced about things and to examine and see all sides. It is also true that (the more naïve part of) the audience will start to associate you with the side you argued, “as if you really meant it”. Many of the comments Costas received from the public were of that nature, i.e. “he is that moronic guy who thinks you should make your employees unhappy”. Which of course is not what he meant at all. Nor was it the purpose of the debate.

Yet I also think it is difficult to find people willing to debate a business issue because academics are simply afraid to have an opinion. We are not only trained to examine and see all sides of an argument; we are also trained not to believe in something – let alone argue in favour of it – until there is research that has produced supporting evidence for it. In fact, if in an academic article you were ever to suggest the existence of a certain relationship without presenting evidence, you’d be in for a good bellowing and a firm rejection letter. And perhaps rightly so, because providing evidence, and thus real understanding, is what research is about.

But, at some point, you also have to take a stand. As a paediatric neurologist once told me, “what I do is part art, part science”. What he meant is that he knew all the research on all medications and treatments, but at the end of the day every patient is unique and he would have to make a judgement call on what exact treatment to prescribe. And doing that requires an opinion.

You don’t hear much opinion coming from the ivory tower in business academia. Which means that the average business school professor does not receive much hate mail. It also means he doesn’t have much of an audience outside of the ivory tower.

Monday, March 19, 2012

Research by Mucking About

I am a long-standing fan of the Ig Nobel awards. The Ig Nobel awards are an initiative of the magazine AIR (the Annals of Improbable Research) and are handed out on a yearly basis – often by real Nobel Prize winners – to people whose research “makes people laugh and then think” (although its motto used to be to “honor people whose achievements cannot or should not be reproduced” – but I guess the organisers had to first experience the “then think” bit themselves).

With a few exceptions, they are handed out for real research, done by academics and published in scientific journals. Here are some of my all-time favourites:
• BIOLOGY 2002. Bubier, Pexton, Bowers, and Deeming. “Courtship behaviour of ostriches towards humans under farming conditions in Britain”. British Poultry Science 39(4)
• INTERDISCIPLINARY RESEARCH 2002. Karl Kruszelnicki (University of Sydney), for performing a comprehensive survey of human belly button lint – who gets it, when, what color, and how much
• MATHEMATICS 2002. Sreekumar and Nirmalan (Kerala Agricultural University). “Estimation of the total surface area in Indian elephants”. Veterinary Research Communications 14(1)
• TECHNOLOGY 2001. Jointly to Keogh (Hawthorn), for patenting the wheel (in 2001), and the Australian Patent Office, for granting him the patent
• PEACE 2000. The British Royal Navy, for ordering its sailors to stop using live cannon shells and to instead just shout “Bang!”
• LITERATURE 1998. Dr. Mara Sidoli (Washington), for the report “Farting as a defence against unspeakable dread”. Journal of Analytical Psychology 41(2)

To the best of my knowledge, there is (only) one individual who has won not only an Ig Nobel Award but also a Nobel Prize. That person is Andre Geim. Geim – who is now at the University of Manchester – long held the habit of dedicating a fairly substantial proportion of his time to just mucking about in his lab, trying to do “cool stuff”. In one such session, together with his doctoral student Konstantin Novoselov, he used a piece of ordinary sticky tape (which allegedly they found in a bin) to peel off a very thin layer of graphite, taken from a pencil. They managed to make the layer of carbon one atom thick, creating the material “graphene”.

In another session, together with Michael Berry from the University of Bristol, he experimented with the force of magnetism. Using a metal slab and a current-carrying coil of wire as an electromagnet, they tried to create a magnetic force that exactly balanced gravity, to make various objects “float”. Eventually, they settled on a frog – which, like humans, consists mostly of water – and indeed managed to make it levitate.

The levitating frog got Geim the Ig Nobel; graphene got him the Nobel Prize.

“Mucking about” was the foundation of these achievements. The vast majority of such experiments don’t go anywhere; some lead to an Ig Nobel and make people laugh; others result in a Nobel Prize. Many of man’s great discoveries – in technology, medicine or art – have been achieved by mucking about. And many great companies were founded by mucking about, in a garage (Apple), a dorm room (Facebook), or a kitchen and a room above a bar (Xerox).

Unfortunately, in strategy research we don’t muck about much. In fact, people are actively discouraged from doing so. During pretty much any doctoral consortium, junior faculty meeting, or annual faculty review, a young academic in the field of Strategic Management is told – with ample insistence – to focus, to figure out in what subfield he or she wants to be known, “who the five people are that are going to read your paper” (heard this one in a doctoral consortium myself), and “who your letter writers are going to be for tenure” (heard this one in countless meetings). The field of Strategy – or any other field within a business school, for that matter – has no time for, and no tolerance of, mucking about. Disdain and a weary shaking of the head are the fate of those who try, and step off the proven path in an attempt to do something original with an uncertain outcome: “he is never going to make tenure, that’s for sure”.

And perhaps that is also why we don’t have any Nobel Prizes.

Tuesday, February 21, 2012

“The Best Degree for Start-up Success”

“So you want to start a company. You've finished your undergraduate degree and you're peering into the haze of your future. Would it be better to continue on to an MBA or do an advanced degree in a nerdy pursuit like engineering or mathematics? Sure, tech skills are hugely in demand and there are a few high-profile nerd success stories, but how often do pencil-necked geeks really succeed in business? Aren't polished, suited and suave MBA-types more common at the top? Not according to a recent white paper from Identified, tellingly entitled “Revenge of the Nerds”.”

Interested? Yes, it does sound intriguing, doesn’t it? It is the start of an article, written by a journalist, based on a report by a company called “Identified”. In the report, you can find that “Identified is the largest database of professional information on Facebook. Our database includes over 50 million Facebook users and over 1.2 billion data points on professionals’ work history, education and demographic data”.

In the report, based on the analysis of data obtained from Facebook, under the header “the best degree for start-up success”, Identified presents some “definitive conclusions” about “whether an MBA is worth the investment and if it really gets you to the top of the corporate food chain”. Let me no longer hold you in suspense (although I think by now you do see this one coming from a mile or two, like a Harry and Sally romance), the definitive conclusion is: “that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day”.

So I have read the report…

[insert deep sigh]

and – how shall I put it – I have a few doubts… ( = polite English euphemism)

Although Identified has “assembled a world class team of 15 engineers and data scientists to analyse this vast database and identify interesting trends, patterns and correlations” I am not entirely sure that they are not jumping to a few unwarranted conclusions. ( = polite English euphemism)

So, when they dig up from Facebook all the profiles of anyone listed as “CEO” or “founder”, they find that about ¾ are engineers and a mere ¼ are MBAs. (Actually, they don’t even find that, but let me not get distracted here). I have no quibbles with that; I am sure they do find what they find; after all, they do have “a world class team of 15 engineers and data scientists”, and a fact is a fact. What I have more quibbles with is how you get from that to the conclusion that if you want to build a company, an advanced degree in a subject like engineering beats an MBA any day.

Perhaps it seems an obvious and legitimate conclusion to you: more CEOs have an engineering degree than an MBA, so surely getting an engineering degree makes you more likely to become a CEO? But no, that is where it goes wrong; you cannot draw this conclusion from those data. Perhaps “a world class team of 15 engineers and data scientists [able] to analyse this vast database and identify interesting trends, patterns and correlations” is superb at digging up the data for you but, apparently, less skilled at drawing justifiable conclusions. (I am tempted to suggest that, for this, they would have been better off hiring an MBA, but will fiercely resist that temptation!)

The problem is what we call “unobserved heterogeneity”, coupled with some “selection bias”, finished off with some “bollocks” (one of which is not a generally accepted statistical term) – and in this case there is lots of it. For example – to start with a simple one – perhaps there are simply a lot more engineers trying to start a company than MBAs. If there are 20 engineers trying to start a company and 9 of them succeed, while there are 5 MBAs trying it and 3 of them succeed, can you really conclude that an engineering degree is better for start-up success than an MBA?
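To make the base-rate point concrete, here is a toy calculation using the hypothetical numbers above (they are made up for illustration, not taken from any real data): counting founders rewards whichever group tries most often, while the success rate – the thing a degree could actually influence – can point the other way.

```python
# Hypothetical numbers from the example above (not real data):
# more engineers simply try to start a company than MBAs do.
engineers_trying, engineers_succeeding = 20, 9
mbas_trying, mbas_succeeding = 5, 3

# Counting only the successes -- which is all the founder profiles show --
# engineers dominate:
print(f"founders found: {engineers_succeeding} engineers vs {mbas_succeeding} MBAs")

# ...but the success *rates* point the other way:
eng_rate = engineers_succeeding / engineers_trying
mba_rate = mbas_succeeding / mbas_trying
print(f"success rates: engineers {eng_rate:.0%}, MBAs {mba_rate:.0%}")  # 45% vs 60%
```

The database only ever shows the numerators; without the denominators – how many of each group tried – no conclusion about which degree “works better” follows.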

But, you may object, why would there be more engineers trying to start a business? Alright then, since you insist: suppose out of 10 engineers 9 succeed and out of 10 MBAs only 3 do, but the nine head $100,000 businesses and the three head $100 million ones. Still so sure that an engineering degree is more useful to “get you to the top of the corporate food chain”? What if the MBA companies have all been in existence for 15 years while all the engineering start-ups never make it past year 2?

And these are of course only very crude examples. There are likely more subtle processes going on as well. For instance, the same type of qualities that might make someone choose to do an engineering degree could prompt him or her to start a company; however, this same person might have been better off (in terms of being able to make the start-up a success) if s/he had done an MBA. And if you buy none of the above (because you are an engineer, or about to be engaged to one), what about the following: people who choose to do an engineering degree are inherently smarter and more able than MBAs, hence they start more, and more successful, companies. However, that still leaves wide open the possibility that such a very smart and able person would have been even more successful had s/he chosen to do an MBA before venturing.

I could go on for a while (and frankly I will) but I realise that none of my aforementioned scenarios will be the right one, yet the point is that there might very well be a bit going on of several of them. You cannot compare the ventures started by engineers with the ventures headed by MBAs, you can’t compare the two sets of people, you can’t conclude that engineers are more successful founding companies, and you certainly cannot conclude that getting an engineering degree makes you more likely to succeed in starting a business. So, what can you conclude from the finding that more CEOs/founders have a degree in engineering than an MBA? Well… precisely that; that more CEOs/founders have a degree in engineering than an MBA. And, I am sorry, not much else.

Real research (into complex questions such as “what degree is most likely to lead to start-up success?”) is more complex. And so will likely have to be the answer. For some types of businesses an MBA might be better, and for others an engineering degree. And some types of people might be more helped by an MBA, where other types are better off with an engineering degree. There is nothing wrong with deriving some interesting statistics from a database, but you have to be modest and honest about the conclusions you can link to them. It may sound more interesting if you claim that you have found a definitive conclusion about which degree leads to start-up success – and it certainly will be more eagerly repeated by journalists and in subsequent tweets (as happened in this case) – but I am afraid that does not make it so.

Monday, January 23, 2012

Fraud and the Road to Abilene

Over the weekend, an (anonymized) interview was published in a Dutch national newspaper with the three “whistle blowers” who exposed the enormous fraud of Professor Diederik Stapel. Stapel had gained stardom status in the field of social psychology but, simply speaking, had been making up all his data all the time. There are two things that struck me:

First, in a previous post about the fraud, based on a flurry of newspaper articles and the interim report that a committee examining the fraud had put together, I wrote that it was eventually his clumsiness in faking the data that got him caught. Although that general picture certainly remains – he wasn’t very good at faking data; I think I could easily have done a better job (although I have never even tried anything like that, honest!) – it wasn’t as clumsy as the newspapers sometimes made it out to be.

Specifically, I wrote: “eventually, he did not even bother anymore to really make up newly faked data. He used the same (fake) numbers for different experiments, gave those to his various PhD students to analyze, who then in disbelief slaving away in their adjacent cubicles discovered that their very different experiments led to exactly the same statistical values (a near impossibility). When they compared their databases, there was substantial overlap”. It now seems the “substantial overlap” was merely part of one column of data. Plus, there were various other things that got him caught.

I don’t beat myself too hard over the head with my keyboard about repeating this misrepresentation by the newspapers (although I have given myself a small slap on the wrist – after having received a verbal one from one of the whistlers) because my piece focused on the “why did he do it?” rather than the “how did he get caught”, but it does show that we have to give the three whistle blowers (quite) a bit more credit than I – and others – originally thought.

The second point that caught my attention is that, since the fraud was exposed, various people have come out admitting that they had “had suspicions all the time”. You could say “yeah right”, but there do appear to be quite a few signs that various people had indeed been having their doubts for a longer time. For instance, I have read an interview with a former colleague of Stapel at Tilburg University credibly admitting to this, I have directly spoken to people who said there had been rumors for longer, and the article with the whistle blowers suggests even Stapel’s faculty dean might not have been entirely dumbfounded that it had all been too good to be true after all... All the people who admit to having had doubts in private state that they did not feel comfortable raising the issue while everyone just seemed to applaud Stapel and his Science publications.

This reminded me of the Abilene Paradox, first described by Professor Jerry Harvey of George Washington University. He described a leisure trip he, his wife, and his parents made in Texas in July, in his parents’ un-airconditioned old Buick, to a town called Abilene. It was a trip they had all agreed to – or at least not disagreed with – but, as it later turned out, none of them had wanted to go on. “Here we were, four reasonably sensible people who, of our own volition, had just taken a 106-mile trip across a godforsaken desert in a furnace-like temperature through a cloud-like dust storm to eat unpalatable food at a hole-in-the-wall cafeteria in Abilene, when none of us had really wanted to go.”

The Abilene Paradox describes the situation where everyone goes along with something, mistakenly assuming that other people’s silence implies that they agree. And the (erroneous) feeling of being the only one who disagrees makes each person shut up as well, all the way to Abilene.

People had suspicions about Stapel’s “too good to be true” research record and findings but did not dare to speak up while no-one else did.

It seems there are two things that eventually made the three whistle blowers speak up and expose Stapel: friendship and alcohol.

They had struck up a friendship and one night, fuelled by alcohol, raised their suspicions to one another. And, crucially, they decided to do something about it. Perhaps there are some lessons in this for the world of business. For example, Jim Westphal, who has done extensive, thorough research on boards of directors, showed that boards often suffer from the Abilene Paradox, for instance when confronted with their company’s new strategy. Yet, Jim and colleagues also showed that friendship ties within top management teams might not be such a bad thing. We are often suspicious of social ties between boards and top managers, fearful that they might cloud judgment and make boards reluctant to discipline a CEO. But such friendship ties – whether fuelled by alcohol or not – might also help to lower the barriers to resolving the Abilene Paradox. So perhaps we should make friendship and alcohol mandatory – religion permitting – both during board meetings and academic gatherings. It would undoubtedly help make them more tolerable as well.

Wednesday, January 11, 2012

Bias (or why you can’t trust any of the research you read)

Researchers in Management and Strategy worry a lot about bias – statistical bias. In case you’re not such an academic researcher, let me briefly explain.

Suppose you want to find out how many members of a rugby club have their nipples pierced (to pick a random example). The problem is, the club has 200 members and you don’t want to ask them all to take their shirts off. Therefore, you select a sample of 20 of them and ask them to bare their chests. After some friendly bantering they agree, and it then appears that no fewer than 15 of them have their nipples pierced, so you conclude that the majority of players in the club have likely undergone the slightly painful (or so I am told) aesthetic enhancement.

The problem is, there is a chance that you’re wrong. There is a chance that, due to sheer coincidence, you happened to select 15 pierced pairs of nipples even though, among the full set of 200 members, they are very much the minority. For example, if in reality only 30 of the 200 rugby blokes have their nipples pierced, you could, due to sheer chance, happen to pick 15 of them in your sample of 20, and your conclusion that “the majority of players in this club has them” would be wrong.
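For the curious: the probability of such a fluke can be computed exactly with the hypergeometric distribution. A minimal sketch, using only Python's standard library and the made-up club numbers from the example above:

```python
from math import comb

# Population: 200 club members, of whom (in reality) only 30 are pierced.
N, K = 200, 30
# Sample: 20 members, of whom we observed 15 pierced.
n, k = 20, 15

def hypergeom_pmf(i, N, K, n):
    """P(exactly i pierced in a sample of n, drawn without replacement)."""
    return comb(K, i) * comb(N - K, n - i) / comb(N, n)

# P(seeing 15 *or more* pierced purely by chance)
p_tail = sum(hypergeom_pmf(i, N, K, n) for i in range(k, min(K, n) + 1))
print(f"chance of a fluke this extreme: {p_tail:.2e}")
```

So yes, the coincidence is possible, but it is vanishingly unlikely; this is exactly the kind of probability behind the 5% convention discussed next.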

Now, in our research, there is no real way around this. Therefore, the convention among academic researchers is that it is OK to claim your conclusion based on only a sample of observations, as long as the probability that you are wrong is no bigger than 5%. If it ain’t bigger than that – and one can relatively easily compute that probability – we say the result is “statistically significant”. Out of sheer joy, we then mark that number with a cheerful asterisk * and say amen.

Now, I just said that “one can relatively easily compute that probability” but that is not always entirely true. In fact, over the years statisticians have come up with increasingly complex procedures to correct for all sorts of potential statistical biases that can occur in research projects of various natures. They treat horrifying statistical conditions such as unobserved heterogeneity, selection bias, heteroscedasticity, and autocorrelation. Let me not try to explain to you what they are, but believe me they’re nasty. You don’t want to be caught with one of those.

Fortunately, the life of the researcher is made easy by standard statistical software packages. They offer nice user-friendly menus where one can press buttons to solve problems. For example, if you have identified a heteroscedasticity problem in your data, there are various buttons to press that can cure it for you. Now, it is my personal estimate (but note, no claims of an asterisk!) that about 95 out of 100 researchers have no clue what happens inside their computers when they press one of those magical buttons, but that does not mean the button does not solve the problem. Professional statisticians will frown and smirk at the thought alone but, if you have correctly identified the condition and the way to treat it, you don’t necessarily have to fully understand how the cure works (although I think it would often help in selecting the correct treatment). So far, so good.

Here comes the trick: All of those statistical biases are pretty much irrelevant. They are irrelevant because they are all dwarfed by another bias (for which there is no life-saving cure available in any of the statistical packages): publication bias.

The problem is that if you have collected a whole bunch of data and you don’t find anything, or at least nothing really interesting and new, no journal is going to publish it. For example, the prestigious journal Administrative Science Quarterly proclaims in its “Invitation to Contributors” that it seeks to publish “counterintuitive work that disconfirms prevailing assumptions”. And perhaps rightly so; we’re all interested in learning something new. So if you, as a researcher, don’t find anything counterintuitive that disconfirms prevailing assumptions, you are usually not even going to bother writing it up. And in case you’re dumb enough to write it up and send it to a journal requesting them to publish it, you will swiftly (or less swiftly, depending on which journal you sent it to) receive a reply that has the word “reject” firmly embedded in it.

Yet, unintentionally, this publication reality completely messes up the “5% convention”, i.e. that you can only claim a finding as real if there is no more than a 5% chance that what you found is sheer coincidence (rather than a counterintuitive insight that disconfirms prevailing assumptions). In fact, the chance that what you are reporting is bogus is much higher than the 5% you so cheerfully claimed with your poignant asterisk. Because journals will only publish novel, interesting findings – and therefore researchers only bother to write up seemingly intriguing counterintuitive findings – the chance that what eventually gets published is unwittingly BS is vast.

A recent article by Simmons, Nelson, and Simonsohn in Psychological Science (cheerfully entitled “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant”) summed it up pretty clearly. If a researcher, running a particular experiment, does not find the result he was expecting, he may initially think “that’s because I did not collect enough data” and collect some more. He can also think “I used the wrong measure; let me use the other measure I also collected”, or “I need to correct my models for whether the respondent was male or female”, or “let me examine a slightly different set of conditions”. Yet, taking these (extremely common) steps raises the probability that what the researcher finds in his data is due to sheer chance from the conventional 5% to a whopping 60.7%, without the researcher realising it. He will still cheerfully put the all-important asterisk in his table and declare that he has found a counterintuitive insight that disconfirms some important prevailing assumption.
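The mechanism is easy to reproduce. Below is my own toy Monte Carlo sketch, not the authors' code: the design choices (groups of 20, a second outcome measure correlated with the first, one round of extra data collection) are illustrative assumptions. The null hypothesis is true by construction, yet the "flexible" researcher finds a significant result far more often than 5% of the time.

```python
import random
from statistics import NormalDist, mean, stdev

def p_value(a, b):
    """Two-sided p-value from a two-sample z-test (normal approximation)."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = (mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def correlated_pair():
    """Two outcome measures for one subject, correlated, both pure noise."""
    shared = random.gauss(0, 1)
    return (shared + random.gauss(0, 1), shared + random.gauss(0, 1))

def one_study(n=20, extra=10, alpha=0.05):
    """One 'flexible' study: no true effect, but four chances to find one."""
    g1 = [correlated_pair() for _ in range(n)]
    g2 = [correlated_pair() for _ in range(n)]
    for attempt in range(2):              # before / after collecting more data
        for m in (0, 1):                  # measure A, then correlated measure B
            if p_value([s[m] for s in g1], [s[m] for s in g2]) < alpha:
                return True               # a "significant" fluke
        if attempt == 0:                  # not significant yet? collect more
            g1 += [correlated_pair() for _ in range(extra)]
            g2 += [correlated_pair() for _ in range(extra)]
    return False

random.seed(42)
runs = 4000
rate = sum(one_study() for _ in range(runs)) / runs
print(f"false-positive rate with flexible analysis: {rate:.1%}")  # well above 5%
```

Each individual test still honours the 5% convention; it is the unreported freedom to keep trying that quietly inflates the overall error rate.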

In management and strategy research we do highly similar things. We, for instance, collect data with two or three ideas in mind of what we want to examine and test. If the first idea does not lead to the desired result, the researcher moves on to his second idea, and then one can hear a sigh of relief from behind a computer screen that “at least this idea was a good one”. In fact, you might simply be moving on to “the next good idea” until you hit on a purely coincidental result: 15 bulky guys with pierced nipples.

Things get really “funny” when one realises that what is considered interesting and publishable differs across fields in Business Studies. For example, in fields like Finance and Economics, academics are likely to be fairly skeptical about whether Corporate Social Responsibility is good for a firm’s financial performance. In the subfield of Management, people are much more receptive to the idea that Corporate Social Responsibility should also benefit a firm in terms of its profitability. Indeed, as shown by a simple yet nifty study by Marc Orlitzky, recently published in Business Ethics Quarterly, articles published on this topic in Management journals report a statistical relationship between the two variables that is about twice as big as the ones reported in Economics, Finance, or Accounting journals. Of course, who does the research and where it gets printed should not have any bearing on what the actual relationship is but, apparently, preferences and publication bias do come into the picture with quite some force.

Hence, publication bias vastly dominates any of the statistical biases we get so worked up about, making them pretty much irrelevant. Is this a sad state of affairs? Ehm…. I think yes. Is there an easy solution for it? Ehm… I think no. And that is why we will likely all be suffering from publication bias for quite some time to come.