Tag: Executive Leadership

Executive leadership practices for senior leaders navigating complex technology and people challenges.

  • Rethink Responsibility in the Age of AI: Who Owns the Decision?

    The 2018 Uber fatality in Tempe, Arizona, was a watershed moment — not because autonomous vehicles were new, but because nobody could answer a simple question: who was responsible? When I reflect on that case, I see it less as a failure of technology and more as a failure of organizational thinking. We had built a system capable of making life-or-death decisions without first deciding who owned those decisions. That gap has not closed. If anything, it has widened.

    Most of the executives I speak with are deploying AI faster than they are building accountability structures around it. That is not a technology problem. That is a leadership problem — and it is the kind that tends to surface only after something goes badly wrong.

    What the Research Shows

    A recent piece in MIT Sloan Management Review uses the Tempe accident as a lens for a much larger argument: that traditional frameworks for organizational responsibility were simply not designed for AI systems. The article surfaces what researchers call a “responsibility gap” — the space between the humans who build AI, the humans who deploy it, and the humans who are affected by it. When an algorithm causes harm, existing structures allow accountability to dissolve across that chain rather than concentrate where it belongs.

    The piece argues that this is not an edge case reserved for autonomous vehicles. It applies anywhere AI is making or materially influencing decisions — in hiring, lending, healthcare triage, content moderation, and beyond. The core finding is direct: responsibility must be redesigned, not just assigned after the fact.

    Why This Changes the Playbook

    Here is what I think most leaders are getting wrong. They treat AI accountability as a compliance exercise — something you hand to Legal or put in a policy document. That approach fails for a structural reason: AI systems do not behave like the tools those policies were written for. A spreadsheet does what you tell it. An AI model operating in a dynamic environment can produce outcomes that no single person designed, anticipated, or approved.

    This creates several second-order problems that boards and executive teams are not yet pricing in:

    • Diffused accountability becomes no accountability. When responsibility is spread across data scientists, product managers, procurement teams, and external vendors, it effectively belongs to no one — until a regulator or a plaintiff’s attorney decides otherwise.
    • Speed of deployment is outpacing governance architecture. Most AI governance frameworks I see are retrofitted onto systems already in production. That is backwards. Accountability structures need to be part of system design, not bolted on afterward.
    • The “human in the loop” assumption is often a fiction. Organizations claim human oversight while designing workflows where the human has neither the time nor the information to meaningfully intervene. That is not oversight. That is liability theater.
    • Reputational exposure is asymmetric. The upside of an AI-driven efficiency gain is incremental. The downside of a high-profile AI failure is existential for trust. Leaders are not weighting these outcomes correctly.

    The question is not whether your AI will make a consequential mistake. It is whether your organization has decided, in advance, who owns that mistake and what they are empowered to do about it.

    What the Tempe case ultimately demonstrated is that ambiguity about responsibility is itself a strategic risk. Courts, regulators, and the public will assign blame regardless of whether your org chart has a clear answer. You want to have made that determination yourself, deliberately, before the moment of crisis.

    Key Takeaways for Leaders

    • Map every consequential AI decision in your organization to a named human owner with real authority to intervene — before deployment, not after an incident.
    • Audit your “human in the loop” claims honestly: if the human cannot realistically override the system, remove that claim from your governance documentation.
    • Treat the responsibility gap as a board-level risk, not an IT or compliance issue — it has direct implications for liability, regulation, and organizational trust.
    • Build accountability architecture in parallel with AI development cycles, not as a retrofit after systems are already in production.

      Interesting Articles to Read

  • AI Will Only Replace White-Collar Jobs If Leaders Let It

    Every few months, a new wave of AI capability announcements triggers the same boardroom conversation: which roles are safe, which are not, and how fast should we move? I have sat in enough of those rooms to know that most leaders are asking the wrong question. The real question is not whether AI will replace white-collar workers. It is whether leaders will give it permission to — by hollowing out the human substance from their organizations in pursuit of efficiency.

    That framing is uncomfortable, but I think it is the honest one. The threat is not purely technological. It is organizational, cultural, and ultimately a leadership choice.

    What the Research Shows

    A recent piece from ChiefExecutive.net makes a pointed argument: AI will only displace white-collar professionals at scale if organizations forget what human beings uniquely bring to work. The leaders who will matter most in the age of AI are those who lead in the most distinctly human ways — with empathy, moral judgment, contextual wisdom, and the ability to build genuine trust. The article’s core claim is not that AI is overhyped, but that the leaders who treat humanity as a competitive advantage, not a cost center, will define what survives and what gets automated away.

    The leaders who matter most in the age of AI will be the ones who, unapologetically and radically, lead most like humans.

    Why This Changes the Playbook

    Most leadership teams approach AI adoption as a capability and cost question. How much can we automate? Where can we compress headcount? That lens is not wrong — it is just dangerously incomplete. Here is what I think most executives are missing.

    • Efficiency without judgment creates brittleness. AI optimizes for patterns in historical data. It cannot navigate genuine ethical ambiguity, organizational politics, or the kind of relational trust that holds teams together under pressure. When you strip human layers out of decision-making chains, you also strip out the buffers that catch failure before it compounds.
    • The skills most at risk from AI are not the ones we think. Rote analysis, template-driven communication, standardized reporting — these are already eroding. What remains irreplaceable is the ability to read a room, make a call with incomplete information, and take accountability for consequences. These are leadership fundamentals, not soft extras.
    • Culture becomes a strategic moat. Organizations that invest in psychological safety, mentorship, genuine human development, and values-based decision-making will be harder to replicate than those competing purely on AI capability. The technology is increasingly available to everyone. The humans who use it wisely are not.
    • There is a second-order talent risk that boards are underestimating. If your organization signals — through structure, incentives, or rhetoric — that human judgment is being systematically downgraded, your best people will notice first and leave first. You will be left with those who did not have options.

    I am not arguing against AI adoption. I am arguing that the leaders who treat it as a replacement strategy rather than an augmentation strategy are making a costly long-term bet on the wrong variable.

    Key Takeaways for Leaders

    • Audit your AI adoption decisions for what human capability is being removed, not just what cost is being reduced.
    • Invest deliberately in the leadership behaviors AI cannot replicate — ethical reasoning, relational trust, and contextual judgment.
    • Treat culture and human development as a competitive differentiator, not an overhead line item to be managed down.
    • Watch your attrition patterns carefully — the first people to leave an organization that undervalues human judgment are usually the ones you can least afford to lose.
    • Make your organization’s stance on human-centered leadership explicit, both internally and in how you present to the market for talent.

  • Why AI Projects Fail Before They Start, Says Prezi’s CEO

    Every executive I speak with right now has a version of the same story: we invested in AI, we ran the pilots, and somehow the results didn’t justify the hype. The instinct is always to blame the technology — wrong model, wrong vendor, wrong timing. I’ve come to believe that instinct is almost always wrong.

    The real failure happens before a single line of code is written. It happens in the room where leaders define what they’re trying to solve — or more precisely, where they fail to define it with any real precision.

    What the Research Shows

    Peter Arvai, CEO of Prezi, makes an argument in this Inc. piece that cuts through a lot of the noise: AI projects don’t fail because the technology is immature. They fail because organizations bring poorly formed questions to capable tools. The model isn’t the bottleneck. The thinking is. Arvai’s position is that most companies are asking AI to do things before they’ve clearly articulated what success even looks like — and then they’re surprised when the output doesn’t move the needle.

    This isn’t a technical critique. It’s a leadership critique. And coming from someone running a company whose entire product is built around how humans communicate and structure ideas, it carries particular weight.

    Why This Changes the Playbook

    Here’s what I think this really means for organizations: we’ve been treating AI adoption as an engineering problem when it’s actually a strategic clarity problem. The companies getting real returns from AI aren’t necessarily the ones with the biggest budgets or the most sophisticated infrastructure. They’re the ones who spent time upstream — defining the decision they’re trying to improve, the workflow they’re trying to compress, the outcome they’re willing to be held accountable for.

    Most leaders get this wrong in a few predictable ways:

    • Vague mandates produce vague results. “Use AI to improve customer experience” is not a brief. It’s a wish. Teams will build something, demo something, and then nothing changes operationally.
    • Question-framing gets delegated to the wrong people. Technical teams are excellent at answering questions but were never trained to interrogate whether the right question is being asked in the first place.
    • Activity pressure overrides strategic discipline. There’s enormous pressure to show AI activity — pilots, announcements, budget allocations — which creates incentives to start building before the thinking is done.
    • Success metrics get bolted on after the fact. Teams end up measuring effort and output rather than actual business impact.

    The organizations that will win with AI are not the fastest adopters. They are the most disciplined questioners.

    The second-order effect of this is significant. If your AI projects are consistently underdelivering, your organization starts to develop learned helplessness around the technology. The cynicism compounds. Talented people who could drive real transformation start to disengage. You end up spending more on AI and trusting it less — a genuinely dangerous position to be in as the competitive landscape shifts.

    Key Takeaways for Leaders

    • Before any AI initiative gets a budget, require the team to articulate the specific decision or outcome it will improve — in one sentence.
    • Treat question formulation as a senior leadership responsibility, not something to delegate entirely to data science or IT.
    • Audit your current AI projects against concrete business metrics, not activity metrics like “models deployed” or “prompts processed.”
    • Build in a pre-mortem practice: before launch, ask what a failed version of this project looks like and whether you’d actually know the difference.
    • Recognize that AI fluency for executives means knowing how to frame problems precisely, not knowing how the models work under the hood.
  • Why Your Brand Strategy Starts With Your Employees, Not Your Customers

    Every leader I know has spent real money on brand strategy. Agencies, workshops, brand guidelines thicker than a dictionary. And then I watch those same leaders undermine the entire investment the moment they walk into a Monday morning meeting. The brand your customers eventually experience is assembled, piece by piece, inside your organization long before any campaign goes live.

    This is the insight most executives intellectually accept and operationally ignore. I’ve been guilty of it myself. We treat brand as a marketing problem when it is, at its core, a leadership problem.

    What the Research Shows

    A recent piece on Inc.com makes the case plainly: employees encounter and internalize your brand long before any customer does. The argument is that leadership behavior is the primary signal employees use to decode what the organization actually values — not the values poster on the wall, not the all-hands presentation, but how their manager behaves under pressure. Culture is downstream of leadership conduct, and brand is downstream of culture.

    The practical implication is significant. If your people do not believe the brand promise, they will not deliver it. You cannot train or incentivize your way around that gap. The authenticity problem starts at the top and travels down through every customer-facing interaction your organization produces.

    Why This Changes the Playbook

    Most leaders frame brand alignment as a communications challenge. Get the messaging right, cascade it properly, reinforce it in onboarding. That framing is wrong, and it is expensive to be wrong about it. Here is what I think this really means for organizations trying to close the gap between their stated brand and their delivered experience:

    • Leadership behavior is your highest-leverage brand channel. Every decision a senior leader makes in a meeting, in a crisis, in a performance review — that is brand communication. It carries more weight with employees than any internal campaign ever will.
    • Most organizations measure brand health externally and almost never measure it internally first. If you are not regularly asking employees whether they believe the brand promise, you are flying blind on the most important leading indicator you have.
    • The employee experience gap tends to show up first in day-to-day customer service quality, and only later in satisfaction scores. By the time NPS drops, the cultural erosion has been underway for months or years.
    • Middle management is the critical leverage point that most brand initiatives skip entirely. Senior leaders set the tone; middle managers translate it into daily reality. A misaligned middle layer will quietly hollow out any brand investment.
    • Recruiting and retention are brand strategy, not just HR functions. Who you hire, who gets promoted, and who you let go are the clearest signals your organization sends about what it actually values.

    The brand your customers experience is only ever as strong as the culture your employees inhabit. Fix the inside, and the outside takes care of itself.

    The second-order effect here is competitive. Organizations that treat internal brand alignment as a strategic priority build a compounding advantage. Culture is genuinely hard to replicate. A competitor can copy your product features or your pricing within a quarter. They cannot copy a decade of consistent leadership behavior that has built real organizational trust.

    Key Takeaways for Leaders

    • Audit your own leadership behavior before your next brand initiative — your conduct is already communicating a brand position, whether you intend it or not.
    • Add an internal brand health metric to your existing measurement framework and review it with the same rigor you apply to customer-facing scores.
    • Treat middle management alignment as a prerequisite for any brand transformation effort, not an afterthought.
    • Close the gap between your stated values and your promotion and compensation decisions — employees notice the delta immediately.
    • Brief your HR leadership as a strategic partner in brand execution, not just a support function.
