Using AI for Better Decision-Making

If you’ve been working in corporations for any length of time, you’ve probably realized that many of the decisions executives make are bad ones. Their negative consequences are generally covered up, and when problems result, a scapegoat is punished while the failed decision-maker is rewarded and promoted. 

To suggest that corporations treat employees and executives unfairly would be an understatement.  

You probably have also been in meetings that operated under the informal rule that decisions are made by the least knowledgeable and either most senior or loudest person in the room, which goes to the heart of why so many decisions are bad ones. Way too many corporations seem to treat Dunning-Kruger behaviors, such as decision-makers overestimating their own competence, as operational requirements rather than mistakes to avoid.  

And it isn’t just companies. Most politicians seem to be Dunning-Kruger poster children. But what if we could fix this or at least mitigate this inefficient behavior? 

The problem with corporate decision-making

One of the causes of the bad decisions I’ve been part of, or have been brought in to review after the fact, is that the decision-making process in most companies isn’t focused on ensuring that decisions are informed, or at least significantly influenced, by the most qualified employees. 

When a mistake is made, the initial focus isn’t on understanding the cause of the mistake, but on finding someone to throw under the bus for it.  

This lack of focus on decision quality and excessive focus on blame likely relates directly to why so few companies have lasted a century, or even 50 years. Eventually, companies build up a critical mass of bad decisions and go out of business, with the irony being that the folks who made those bad decisions often end up very wealthy, while the folks who were impacted by them end up unemployed.  

Fixing the corporate decision-making process with AI

After speaking with Ford’s now ex-CEO on this topic, I think part of what needs to be done is a policy change that kicks in when there is a failure. That change should be an immediate causal analysis of what happened, why it happened, and how it can be avoided in the future. 

Ironically, the process that is typically in place, which seems to jump immediately to ‘get the perpetrators,’ is counterproductive, because the one person who has most likely learned from the mistake is the person who made it. 

When I’ve been brought in to analyze a mistake, it has generally come down to the decision not being adequately backed up with research, underfunding of the analysis that led to the decision, or the decision being made by someone unqualified to make it.

This is where AI could come in. It could rank the people who, individually or collectively, are involved in a major decision and report the probability of its success, based on the decision-makers’ qualifications and past decisions. If that probability comes in very low, the decision is suspended pending review by someone more qualified.  
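
To make that idea concrete, here is a minimal, purely illustrative sketch of how such a scoring step might work. Everything in it is an assumption for illustration: the field names, the blending weights, and the review threshold are hypothetical, not a description of any real product or dataset.

```python
from dataclasses import dataclass

# Hypothetical sketch only: the qualification score, success rate, weights,
# and the 0.4 review threshold are illustrative assumptions.

@dataclass
class DecisionMaker:
    name: str
    qualification_score: float   # 0.0-1.0, e.g., from a domain-expertise assessment
    past_success_rate: float     # fraction of prior comparable decisions that succeeded

def estimated_success_probability(dm: DecisionMaker,
                                  weight_qualification: float = 0.5) -> float:
    """Blend qualifications and track record into a rough success estimate."""
    weight_history = 1.0 - weight_qualification
    return (weight_qualification * dm.qualification_score
            + weight_history * dm.past_success_rate)

def review_decision(team: list[DecisionMaker], threshold: float = 0.4) -> str:
    """Score everyone involved and flag the decision for review if the estimate is low."""
    if not team:
        return "No decision-makers listed; escalate for review."
    collective = sum(estimated_success_probability(dm) for dm in team) / len(team)
    ranked = sorted(team, key=estimated_success_probability, reverse=True)
    report = ", ".join(f"{dm.name}: {estimated_success_probability(dm):.2f}" for dm in ranked)
    if collective < threshold:
        return f"SUSPEND pending qualified review (estimate {collective:.2f}) - {report}"
    return f"Proceed (estimate {collective:.2f}) - {report}"

if __name__ == "__main__":
    team = [
        DecisionMaker("VP, Product", qualification_score=0.3, past_success_rate=0.4),
        DecisionMaker("Senior Engineer", qualification_score=0.9, past_success_rate=0.7),
    ]
    print(review_decision(team))
```

In a real system the weights and threshold would presumably be learned from historical decision outcomes rather than hard-coded, but the gating logic is the relevant idea: score the people involved, rank them, and pause low-confidence decisions for a more qualified review.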

This would not only provide an early warning and the ability to correct bad decisions before they result in bad outcomes, but it would also be a forcing function for those who are controlled by Argumentative Theory — those who basically must win every argument, because they value status over success — to either improve themselves or find other employers. This should improve the quality of the decision-makers over time and thus the quality of their decisions.  

New decision-making

One of the most frustrating experiences I had as a competitive analyst was repeatedly having meetings with Ph.D.s from corporate who not only called my teams idiots but would then, after a ton of work, grudgingly agree we were right. 

Then they’d go back to corporate, get replaced, and we’d start the process over again. This happened three times before they “fixed” the problem, got rid of my team, brought the disputed product out, and put the division out of business.  

Decisions should be influenced and made by people who are most qualified to make them. With AI and policy changes, we likely could build far more companies that could last centuries. But only if we sacrifice our blame-first policies and put our efforts into assuring the quality of the decisions rather than unreasonably protecting or punishing the decision-makers — or especially their scapegoat proxies.  
