An IOI member sent in a great question about the possible sources of bias that can affect an analysis. It’s no wonder the question is a good one: the member, Walter W., is a superforecaster from Philip Tetlock’s Good Judgement Project.


Question

I appreciate the depth of thought that went into the report on the possible effects of a Trump presidency. There are many points that make logical sense if Trump’s current policy framework is implemented. “If” is the qualifying word in the chain of logic and where a Bayesian update from time to time might come in handy. I might be susceptible to confirmation bias on this one: I hold similar thoughts on part of what you have presented.

The data that helps support the narrative in a meaningful way is where your report stands out. Thanks! I really enjoyed it. The link to Dalio is quite interesting as well.

Is there a tendency for analysts to create an anchor on their work due to the act of writing a report? Meaning, the report is a big picture narrative written at a certain point in time. It might be easy to anchor to the narrative simply because it is in writing.

Reply

Hi Walter,

Thanks for the excellent mail and comments! I think I’ll address all of them here, but I’m going to split up my answer a bit differently from how you framed your question.

Report Writing

It takes me a long time to write a report because I find that the writing process is essentially an extension of the research process. I’m sure you’ve found this in your own work: after studying something for a while, you have a pretty good idea of what is going on. If asked to write a report about it, you’d write down what you were thinking. This creates the problem of basing a report on opinion rather than evidence. Our minds are usually processing information on a subconscious level, and that subconscious level has a lot to do with us having a “pretty good idea”. Unfortunately, that subconscious level can also trick us through various behavioral quirks. So if we simply write after we have a good idea about something, we can miss things, fail to consider counterpoints, and so on.

What I meant about making the writing process part of the research process is that when I am writing, essentially what I am doing is writing down a hypothesis, then going back and asking, “Okay, you’ve said this. Do you have any evidence to support it?” Sometimes I will have gathered the evidence as part of developing my opinion in the first place, but sometimes I have to go out and collect and analyze data. Often in that process, I’m forced to admit that my original contention was wrong, so I need to go back and rethink the original premise.

Testability

In this whole process, what I’m aiming for is a number of observable, testable hypotheses that I can go back and check midway through the “experiment”, as you’ve suggested. I think the worst way to write a report is to collect anecdotes, tie them into a narrative, and base a conclusion on the narrative one has created for oneself. What I would rather do is write a series of observations that are measurable and quantifiable, and make a prediction based upon those observations. Once done, I’ll pull those predictions together into an overall prediction.
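
To connect this to the Bayesian updating you mentioned, here is a minimal sketch, in Python, of how one of those testable hypotheses might be updated midway through the “experiment”. The hypothesis, prior, and likelihoods below are invented purely for illustration.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical example: hypothesis "policy X lifts GDP growth", prior belief 60%.
# A data release arrives that we would expect to see 70% of the time if the
# hypothesis is true, but only 40% of the time if it is false.
prior = 0.60
posterior = bayes_update(prior, p_evidence_if_true=0.70, p_evidence_if_false=0.40)
print(f"prior {prior:.0%} -> posterior {posterior:.0%}")  # roughly 72%
```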

For companies, this process is relatively simple – only a small number of factors affect the value of a company, so you can focus on those, make some sensible predictions for each one, and then see what happens when those predictions interact with one another in different combinations (i.e., valuation scenarios).
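
To make that concrete, here is a minimal Python sketch of what I mean by crossing a few factor predictions into valuation scenarios. The factors and figures are invented placeholders, not numbers from any report.

```python
from itertools import product

# Hypothetical inputs purely for illustration.
revenue_today = 1_000.0  # current revenue, $m
growth = {"bear": 0.00, "base": 0.05, "bull": 0.10}    # 5-year annual revenue growth
margin = {"bear": 0.08, "base": 0.12, "bull": 0.15}    # operating margin
multiple = {"bear": 8.0, "base": 12.0, "bull": 16.0}   # EV/EBIT multiple

def enterprise_value(g, m, x, years=5):
    """Rough value: grow revenue, apply a margin, apply a multiple."""
    future_revenue = revenue_today * (1 + g) ** years
    return future_revenue * m * x

# Cross every prediction with every other to see the full range of outcomes.
for gk, mk, xk in product(growth, margin, multiple):
    ev = enterprise_value(growth[gk], margin[mk], multiple[xk])
    print(f"growth={gk:4s} margin={mk:4s} multiple={xk:4s} -> EV ~ ${ev:,.0f}m")
```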

With a big macro forecast like this Trump piece, it is more difficult to do that, simply because there are a very large number of interacting effects. What I’ve done, in my own mind and in that Trump piece, is to categorize and generalize. In a general sense, all of what Trump is doing can be boiled down to

  1. Effect on GDP growth,
  2. Effect on personal income growth, and
  3. Influence on the environment from an expectations perspective.

The problem is that those effects interact in non-linear, non-straightforward ways and express themselves over different time horizons. Some policies will have a near-term positive effect, but a long-term negative one.

The Carrier factory decision is perfectly emblematic of that, of course. Short-term, it is a real win for those 800 families. However, it also raises expectations for presidential influence on future corporate asset allocation and supply chain management decisions, and this is probably a negative. Will President Trump get involved every time a factory announces a closing? What about a company that is looking to build and staff a new facility? Will it face some sort of regulatory retribution for deciding to locate manufacturing operations in Mexico or Taiwan? Drawing this out, what if corporations realize that the system can be gamed? They make a public announcement that they are moving a facility to Mexico, knowing that the real purpose is to win tax concessions. Tax concessions on a large scale might start to impinge upon a municipality’s ability to fund roads and other critical infrastructure. And so on.

In the end, what I’m trying to do with the Trump analysis is to categorize future news items in those terms and see what evidence each one provides for or against my scenarios. I’m happy that I’ve got this framework to analyze information, but unhappy that, because of the nature of these inputs, it is not very testable.
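
As a rough illustration of what that bookkeeping could look like, here is a minimal Python sketch of categorizing news items by those three channels and tallying the evidence they provide. The items and scores are invented placeholders, not judgments from the analysis.

```python
from collections import defaultdict

# Hypothetical evidence ledger: each news item is tagged with one of the three
# channels above and a score (+1 supports the scenario, -1 argues against it).
news_items = [
    ("Infrastructure bill passes",         "gdp_growth",      +1),
    ("New tariff announced on imports",    "gdp_growth",      -1),
    ("Payroll tax cut proposed",           "personal_income", +1),
    ("Another Carrier-style intervention", "expectations",    -1),
]

tally = defaultdict(int)
for description, channel, score in news_items:
    tally[channel] += score

for channel, net in sorted(tally.items()):
    print(f"{channel:16s} net evidence: {net:+d}")
```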

Bias

This is a very difficult thing for a professional, public forecaster to deal with. I wrote about some of these issues in my recent piece analyzing Damodaran’s valuation of Valeant, so do have a read through that article if you haven’t already.

In essence, a professional forecaster is investing in each one of his or her forecasts using the most substantial currency available – human capital. Professional forecasters generate income through their work, so a bad forecast carries the threat that the income stream will be turned off. This threat is a powerful motivator and can give rise to some very powerful behavioral biases. It is hard to work around these, but being aware of them is the first step. That is what surprised me so much about Damodaran’s analysis: he was providing nearly textbook examples of anchoring, prospect theory effects, and false precision, without recognizing or mentioning their influence at all.

I like to think that my understanding of these effects gives me an ability to manage them, but I am also very aware of how powerful these effects are subconsciously, so I am sure that they are leaking into my analyses somehow. My goal is to allow as small a trickle as possible and to understand where those trickles are.

Have I covered your question? It was a really interesting one for me, so I’ve written rather too much, perhaps! Hopefully, you’ll find something in it helpful.

All the best,
Erik