How AI Investment Decisions Are Made
Temporal tone, platform fit, and the structures that shape AI deployment
Despite the surge in AI spending, enterprise outcomes remain inconsistent. Only a small portion of AI pilots become tools used at scale: a widely cited McKinsey report found that only 11 percent of companies have adopted gen AI at scale (McKinsey Digital, May 2024). Many organizations deploy AI based on vendor demos, hype cycles, or speculative ROI calculations, without systematically appraising what kind of AI system is being deployed, what level of control is being transferred, and who is making these decisions under what assumptions.
In this post I draw on my own experience, along with recent reports from top consulting firms, industry outlook reports, and technical papers from the AI and business domains, to address a blind spot in most enterprise AI strategies: the manager's decision-making framework before adoption, where investment discretion meets uncertainty. This stage is missing from most conversations. Using a theoretical model from the Journal of Management Information Systems (Manager Appraisal of Artificial Intelligence Investments, by Queiroz et al., 2024), I introduce a structured way to evaluate AI opportunities based not only on expected business value but, more importantly, on how managers perceive delegation, risk, autonomy, and time [1].
At the core of this framework is a 2×2 matrix that classifies AI systems along two axes:
Action Autonomy: whether the AI augments or automates human tasks
Learning Autonomy: whether the AI is pre-trained or capable of ongoing self-learning
This yields four types of AI systems, each with distinct implications for control, risk, and business value creation, as shown in Table 1:
Table 1. The four AI system types

                         Pretrained               Self-learning
  Augments human tasks   Augmented–Pretrained     Augmented–Self-Learning
  Automates human tasks  Automated–Pretrained     Automated–Self-Learning
Managers’ preferences for these types are not uniform. They are shaped by what Queiroz et al. call “temporal tones”, a concept inspired by Emirbayer and Mische’s seminal 1998 work, What Is Agency? [3]. These are dominant orientations toward the past, present, or future that influence how managers assess risk, control, and innovation potential:
Iterative tone: favors stable, low-risk AI that complements existing protocols
Practical-evaluative tone: adapts AI choice to present needs, and may tolerate moderate risk
Projective tone: open to high-autonomy, experimental AI with future-looking upside
While the framework introduced in this post is theoretical in origin, it has been shaped and stress-tested against the lived realities of organizations deploying AI at scale: enterprises that have wrestled with task and decision delegation, accountability, and the challenge of integrating autonomous systems into tightly regulated, often risk-averse environments.
The decision-making guidelines laid out in this post reflect lessons learned across the AI lifecycle, from early-stage experimentation to platform rollouts across global business units, in sectors including energy. I draw on various sources to present the accumulated expertise of those who have built and governed agentic systems, mapped enterprise capabilities to deployment pathways, and faced the constraints of real-time oversight, vendor lock-in, and fragmented toolchains.
For example, I use case studies of closed-loop automation in network infrastructure to show how autonomy moves from isolated tools to mission-critical workflows, and what happens when that autonomy outpaces governance. From use cases in enterprise-grade ML platforms, I show that not all tools are equal in their support for traceability, retraining, and controlled delegation: some platforms are built for transparency and lifecycle control, while others are optimized for speed and experimentation but falter under compliance pressure.
I also draw on the literature on the maturity gap between innovation and adoption to capture when technologies such as causal AI, embodied agents, or world model–driven simulations are ready to be entrusted with decision rights, and when enthusiasm must give way to staged testing and oversight.
Finally, I use my own observations and experiences from almost a year with the Indy Autonomous Challenge competition, during which I had the rare opportunity of working with a group of world-class experts in robotics and autonomous systems on international autonomous vehicle racing: cars running at very high speeds that nonetheless seldom exceed human lap records, in full human-out-of-the-loop autonomy.
These accumulated insights shape the decision-making approach presented in this post. They help the reader build:
A conceptual lens for appraising AI investments based on alignment with managerial trust, oversight capability, and decision-delegation tolerance, in addition to conventional performance metrics.
A platform-deployment overlay that links different types of AI autonomy to the kinds of platforms best suited to support them, both technically and organizationally.
A governance model that distinguishes between AI that needs human-in-the-loop validation, and AI that demands runtime policy enforcement.
A strategic alignment toolkit, helping decision-makers evaluate when their own risk posture, time orientation, and control preferences match the systems they’re about to implement.
This post is written for C-suite leaders allocating budget to black-box systems, risk officers balancing innovation with accountability, AI strategists seeking scalable methods for discerning which systems to build, buy, or block, and even salespeople trying to find the right clients in their addressable market.
Conceptual Foundation
Most AI investment frameworks begin with capabilities: what the system can do, how it scales, what it costs. But these considerations often obscure the deeper, more consequential question: what rights and responsibilities are being transferred from humans to machines, and under what assumptions?
For this section I use recent theoretical work that centers decision-making not on the AI itself but on the managerial act of delegation. In this way we can devise a model in which AI investments are viewed as discretionary choices made under uncertainty, shaped by how managers interpret the role of automation, autonomy, and learning in achieving business value. It is important to remember, throughout this post, that because some AI investments have the potential to permanently change the way decisions are made and to diminish the role of the human expert, this appraisal framework must be distinguished from any general-purpose Information Systems (IS) investment appraisal [1].
1. Two Dimensions of AI Autonomy
As explained, the core of this appraisal model is a simple but powerful 2×2 matrix that categorizes AI systems along two dimensions of autonomy into four types: Augmented–Pretrained, Augmented–Self-Learning, Automated–Pretrained, and Automated–Self-Learning. The two dimensions of autonomy are defined as:
Action Autonomy: whether the AI augments or automates human tasks
Learning Autonomy: whether the AI is pretrained or capable of ongoing self-learning
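As a quick illustration, this classification can be expressed as a small data structure. The sketch below is my own, not from Queiroz et al.; the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class ActionAutonomy(Enum):
    AUGMENTS = "augments"    # the AI assists; humans retain the final decision
    AUTOMATES = "automates"  # the AI acts without per-decision human input

class LearningAutonomy(Enum):
    PRETRAINED = "pretrained"        # behavior fixed at deployment time
    SELF_LEARNING = "self-learning"  # behavior keeps adapting in production

@dataclass(frozen=True)
class AISystem:
    name: str
    action: ActionAutonomy
    learning: LearningAutonomy

    @property
    def quadrant(self) -> str:
        """Combine the two autonomy dimensions into one of the four types."""
        return f"{self.action.value}-{self.learning.value}"

# The anomaly-flagging example discussed later in this post
qa_tool = AISystem("anomaly flagger", ActionAutonomy.AUGMENTS, LearningAutonomy.PRETRAINED)
print(qa_tool.quadrant)  # -> augments-pretrained
```

The point of the structure is that the quadrant, not the feature list, is what determines how much control a manager is ceding.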
2. How do managers choose between these types?
Queiroz et al.'s appraisal framework proposes that managers’ preferences are shaped by a factor often ignored in the AI decision literature: temporal orientation, or what they call "temporal tone".
Managers bring different orientations to investment decisions, shaped by how they relate to the past, respond to present pressures, or imagine the future. The model identifies three dominant tones:
Tone                    Orientation   Appraisal logic
Iterative               Past          Favors stable, low-risk AI that complements existing protocols
Practical-evaluative    Present       Adapts AI choice to present needs; tolerates moderate risk
Projective              Future        Open to high-autonomy, experimental AI with future-looking upside
This table summarizes how a manager’s temporal orientation shapes their AI appraisal logic.
These tones guide how much uncertainty a manager is willing to tolerate, how much control they are willing to cede, and how they interpret the promise of AI.
For example:
A projective CTO seeking to unlock new capabilities in product innovation may favor an automates-self-learns system that adapts to emerging patterns in customer behavior.
An iterative VP of operations responsible for quality assurance may choose an augments-pretrained AI for flagging anomalies, preserving control and traceability.
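As a rough decision aid, the tone-to-quadrant mapping implied by these two examples can be sketched in a few lines of Python. The specific assignments are my illustrative reading of the framework, not prescriptions from Queiroz et al.:

```python
# Illustrative mapping from a manager's temporal tone to the autonomy
# quadrants they are most likely to accept. The assignments encode the two
# worked examples above plus the tone descriptions; they are assumptions.
TONE_PREFERENCES: dict[str, list[str]] = {
    "iterative": ["augments-pretrained"],              # low risk, full traceability
    "practical-evaluative": ["augments-self-learning",
                             "automates-pretrained"],  # moderate, present-focused risk
    "projective": ["automates-self-learning"],         # experimental, future-looking
}

def acceptable_quadrants(tone: str) -> list[str]:
    """Return the AI system types a manager with the given tone tends to favor."""
    return TONE_PREFERENCES.get(tone, [])

print(acceptable_quadrants("projective"))  # -> ['automates-self-learning']
print(acceptable_quadrants("iterative"))   # -> ['augments-pretrained']
```

A table like this makes the selection logic auditable: if a system lands outside the quadrants a manager's tone supports, the mismatch is visible before the investment, not after.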
This framing shifts the conversation from “Can the AI do the job?” to “How will the AI be governed, and under what logic was it selected?”