AI Strategy for Non-Technical CEOs: Where to Start

February 9, 2026
Expert Knowledge
25+ years advising non-technical founders on technology decisions. I've guided dozens of CEOs through AI adoption as a technical advisor and fractional CTO.
You're a CEO. You know AI is important. Your board is asking about your AI strategy. Your competitors are announcing AI features. Your team is experimenting with ChatGPT. And you don't have a computer science degree.

This is one of the most common situations I encounter in my advisory work. Smart, capable CEOs who have built successful companies feel suddenly out of their depth because the technology conversation has shifted under their feet.

Here's the truth: you don't need to understand how large language models work to make good AI decisions for your company. You need a framework for evaluating opportunities, a way to assess vendors without getting sold snake oil, and the confidence to ask the right questions. That's what this guide provides. No jargon. No hype. Just practical guidance for making AI decisions when you're not an engineer. For a deeper technical framework, see my AI Strategy for CEOs. For build vs. buy decisions specifically, see Build vs Buy AI.

You don't need to understand neural networks. You need to understand three things:

AI is a means to an end. It automates tasks, surfaces patterns in data, generates content, and makes predictions. It doesn't tell you which tasks to automate or which patterns matter. Your business judgment does that.

The CEOs who get AI right are the ones who start with business problems and then evaluate whether AI is the best solution. The ones who get it wrong start with "we need an AI strategy" and go looking for problems to solve.

The most impactful AI implementations in our portfolio companies aren't sophisticated. They're straightforward:
  • Automating repetitive tasks that used to take hours (data entry, report generation, email triage)
  • Analyzing customer feedback at scale to spot trends
  • Generating first drafts of content, proposals, and documentation
  • Improving search and recommendations within existing products
These applications use off-the-shelf AI tools, require no custom model training, and can be implemented in weeks, not months.

AI isn't going away, and your competitors are adopting it. Companies that ignore AI don't face an immediate crisis; they face a slow erosion of efficiency and competitiveness as their peers do more with less. But the risk of doing the wrong thing is also real. Spending six months and $500K building a custom AI feature nobody wants is worse than doing nothing. The goal is smart adoption, not fast adoption.

Here's the framework I walk CEOs through when they're trying to figure out where AI fits in their business.
AI Opportunity Evaluation
1. Map (Weeks 1-2)
  • List repetitive tasks across departments
  • Identify data you already collect
  • Note where employees waste time
2. Prioritize (Week 3)
  • Score by impact and feasibility
  • Start with high-impact, low-risk
  • Ignore anything that needs custom ML
3. Test (Weeks 4-8)
  • Run 1-2 pilots with existing tools
  • Measure time saved or quality improved
  • Get team feedback
4. Scale (Month 3+)
  • Roll out what works
  • Kill what doesn't
  • Plan the next round
Walk through each department and ask: "What tasks take the most time, are the most repetitive, and require the least creative judgment?" These are your AI candidates. Common examples:
  • Sales: Researching prospects, writing outreach emails, updating CRM records, generating proposals
  • Customer support: Answering common questions, routing tickets, summarizing conversations
  • Finance: Categorizing expenses, generating reports, forecasting revenue
  • Marketing: Writing first drafts, analyzing campaign performance, personalizing content
  • Operations: Processing invoices, scheduling, managing documentation
For each candidate, ask two questions:
  • Impact: If we automated this or made it 5x faster, how much would it affect our business? (Revenue, cost savings, team capacity, customer experience)
  • Feasibility: Can we do this with existing tools, or does it require custom development?
                  High Feasibility            Low Feasibility
High Impact       Do this first               Plan carefully before committing
Low Impact        Easy win if time permits    Ignore
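If it helps to make the prioritization step concrete, here is a minimal scoring sketch. The opportunities and their 1-5 scores are entirely hypothetical; the point is only that two rough judgments per task are enough to place it in the matrix:

```python
# Hypothetical sketch: rank AI opportunities by impact and feasibility.
# Scores are illustrative 1-5 gut-check judgments, not measurements.
opportunities = [
    {"task": "Email triage", "impact": 4, "feasibility": 5},
    {"task": "Custom demand forecasting model", "impact": 4, "feasibility": 1},
    {"task": "Meeting notes summaries", "impact": 2, "feasibility": 5},
]

def quadrant(opp, threshold=3):
    """Place an opportunity in the impact/feasibility matrix."""
    hi_impact = opp["impact"] >= threshold
    hi_feas = opp["feasibility"] >= threshold
    if hi_impact and hi_feas:
        return "Do this first"
    if hi_impact:
        return "Plan carefully before committing"
    if hi_feas:
        return "Easy win if time permits"
    return "Ignore"

# Highest impact and feasibility first.
for opp in sorted(opportunities,
                  key=lambda o: (o["impact"], o["feasibility"]),
                  reverse=True):
    print(f'{opp["task"]}: {quadrant(opp)}')
```

A spreadsheet does the same job; the value is in forcing an explicit score for every candidate rather than debating them in the abstract.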
Start in the top-left quadrant. Always.

Don't build anything yet. Take the top 1-2 opportunities and test them with existing AI tools. Give a team of 2-3 people access to the tool for 4-6 weeks and measure what happens. What to measure:
  • Time saved per week (in hours, not percentages)
  • Quality of output compared to the old process
  • Team adoption — are people actually using it?
  • Cost of the tool vs. time saved
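The last comparison is simple arithmetic, and writing it out keeps the pilot honest. A back-of-the-envelope sketch with made-up numbers (the hours, hourly cost, and tool price below are all illustrative assumptions):

```python
# Hypothetical pilot ROI check: value of hours saved vs. tool cost.
# All numbers are illustrative, not from any real pilot.
hours_saved_per_week = 6    # measured across the whole pilot team
loaded_hourly_cost = 60     # assumed fully loaded cost per employee hour ($)
tool_cost_per_month = 90    # assumed: 3 seats x $30/month

monthly_value = hours_saved_per_week * 4 * loaded_hourly_cost
net_monthly = monthly_value - tool_cost_per_month
print(f"Value: ${monthly_value}/mo, tool: ${tool_cost_per_month}/mo, "
      f"net: ${net_monthly}/mo")
```

If the net number isn't clearly positive at pilot scale, it won't improve at rollout scale.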
If a pilot doesn't show clear value in 6 weeks, kill it and try the next thing. Don't fall into the trap of "we just need more time" or "we need to customize it." When a pilot works, roll it out. When it doesn't, move on. The entire cycle from mapping to scaling should take about 90 days for the first round, then repeat quarterly.

Every software vendor now claims to be "AI-powered." Most of them bolted a ChatGPT wrapper onto their existing product and updated their marketing. Here's how to cut through it.

"What specific task does your AI automate, and what was the manual process before?" If they can't give a concrete answer, the AI is marketing, not product.

"Can I talk to a customer at my stage who's using the AI features?" Reference calls are the single best vendor evaluation tool. If they can't produce a relevant reference, that's a red flag.

"What happens when the AI gets it wrong?" Every AI system makes mistakes. Good vendors have thought about error handling, human review workflows, and accuracy measurement. Bad vendors change the subject.

"What data do you need from me, and what do you do with it?" Understand where your data goes, whether it's used to train models, and what happens if you leave the vendor. Data privacy and portability matter.

"What does this cost at 10x our current usage?" AI features often have usage-based pricing that scales unpredictably. Understand the cost curve before committing.

Red flags to watch for:
  • "Our AI does everything" — no it doesn't
  • Demos that only show cherry-picked examples
  • Pricing that's hidden or "custom" without clear metrics
  • Claims of 90%+ accuracy without explaining how it's measured
  • No ability to review or override AI outputs
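When a vendor does quote an accuracy number, you can sanity-check it yourself without any tooling: have your team hand-label a small sample and compare it against the AI's outputs. A minimal sketch (the ticket labels below are hypothetical):

```python
# Hypothetical accuracy spot check: compare AI outputs to human labels
# on a small sample of support tickets. Labels are made up for illustration.
ai_labels    = ["billing", "bug", "billing", "feature", "bug", "billing"]
human_labels = ["billing", "bug", "feature", "feature", "bug", "bug"]

matches = sum(ai == human for ai, human in zip(ai_labels, human_labels))
accuracy = matches / len(human_labels)
print(f"Spot-check accuracy: {accuracy:.0%} on {len(human_labels)} tickets")
```

Even 30-50 labeled examples will usually tell you whether a "95% accurate" claim survives contact with your own data.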
Here's a concrete plan you can start this week, regardless of your technical background.
90-Day AI Action Plan for CEOs
1. Learn and Map (Days 1-14): understand the landscape and identify opportunities
  • Try ChatGPT or Claude yourself
  • Map time sinks across departments
  • Talk to 3 peers about their AI adoption
2. Evaluate and Choose (Days 15-30): select pilots and set up success criteria
  • Score opportunities by impact and feasibility
  • Select 1-2 pilots
  • Define success metrics
3. Pilot and Measure (Days 31-60): run experiments with real teams
  • Launch pilots with 2-3 person teams
  • Weekly check-ins on adoption and results
  • Document learnings
4. Decide and Scale (Days 61-90): commit to what works, plan the next cycle
  • Roll out successful pilots
  • Kill underperformers
  • Plan Q2 AI initiatives
Use AI yourself. Spend an hour with ChatGPT or Claude doing real work: drafting emails, summarizing documents, brainstorming. You need firsthand experience to have an informed opinion.

Talk to your team. Ask each department head: "What's the most tedious part of your team's week?" and "Where do you wish you had more bandwidth?" These conversations surface the real opportunities.

Talk to peers. Call 3 CEOs you respect and ask what they're doing with AI. You'll get more honest, practical insights from peers than from vendors or consultants.

Pick your pilots. Based on your mapping, select 1-2 opportunities that are high-impact, high-feasibility, and have a team willing to try.

Choose your tools. For most initial pilots, you don't need enterprise software. Start with:
  • General AI assistants (ChatGPT, Claude) for writing, analysis, and brainstorming
  • Your existing software's AI features (most CRMs, support tools, and marketing platforms have added AI)
  • One or two purpose-built tools for specific use cases
Define success. Before you start, write down what success looks like. "Our support team resolves 20% more tickets per day" is a success metric. "We're using AI" is not.

Run the pilots. Check in weekly with the teams using the tools. The most common failure mode is teams trying the tool for one day, having a bad experience, and abandoning it. Expect a learning curve and support people through it.

Evaluate results against your success criteria. If a pilot worked, invest in rolling it out more broadly. If it didn't, understand why and either adjust or move on. Then start planning the next cycle.

This framework handles 80% of AI decisions a non-technical CEO will face. But some situations genuinely require technical expertise:
  • Building AI into your product. If AI is becoming part of what you sell to customers, you need a technical leader — either a CTO, a fractional CTO, or a technical advisor.
  • Custom model training. If off-the-shelf tools don't solve your problem and you need models trained on your specific data, you need engineering resources.
  • Data infrastructure decisions. If your data is scattered across systems and you need to unify it before AI can be useful, you need technical guidance.
  • Security and compliance. If you're in a regulated industry (healthcare, finance, legal), you need someone who understands both the technology and the regulatory requirements.
For these situations, bringing on a fractional CTO or technical advisor is usually more cost-effective than hiring a full-time technical executive.

Your board wants to hear that you have a plan. Here's how to frame it:

"We're taking a pragmatic approach to AI." You're focused on applications that drive measurable business value, not chasing trends.

"We're running structured pilots." You have a process for evaluating opportunities, testing them with real teams, and scaling what works.

"Here are our first results." After your initial 90-day cycle, you'll have concrete data on time saved, cost reduced, or quality improved.

"Here's our roadmap." Share your plan for the next quarter: which opportunities you're exploring and what success looks like.

This is better than any slide deck filled with AI buzzwords. Boards respect structured thinking and measurable outcomes.

You don't need a technical background to make good AI decisions. You need the same skills that made you a successful CEO: the ability to identify opportunities, evaluate them rigorously, and execute with discipline. AI is a powerful tool, but it's still just a tool. Your job is to point it at the right problems, measure whether it's working, and keep your team focused on outcomes instead of technology for technology's sake. Start small, measure everything, and scale what works. That's the entire strategy.