How Much Should We Tell People When We Use AI?

April 19, 2026 • 6 minutes to read

I use AI a lot. If you’ve been reading this blog, you know that already. A few weeks ago, I wrote about running my business from a terminal window. That post doesn’t do justice to just how much of my business life is managed through iTerm and Claude. And, of course, Brad has been writing about building CompanyOS - the system that powers a lot of this. I’m all in on this stuff, and I’m not subtle about it.

But something happened recently that made me think about an aspect of AI use that I hadn’t considered carefully enough.

The Deck Review

Like most VCs, I get ten or more new pitches a day. The vast majority aren’t a fit, and I try to send a kind note back when they’re not. For the ones where someone asks me for feedback on their deck - which happens more often than you’d think - I used to have two choices. Spend 30 minutes writing detailed feedback, which I rarely had time for. Or send back a “thanks but I don’t have the bandwidth” note, which felt dismissive when someone was genuinely asking for help.

So I built a third option. I trained a skill inside CompanyOS specifically for reviewing pitch decks. It’s not a generic “summarize this PDF” prompt. I fed it my feedback on a few dozen actual decks, taught it what I look for as an investor - narrative clarity, whether the metrics are stage-appropriate, how the financial model holds together, what questions I’d ask in a partner meeting. When I run a deck through it, the output reflects how I actually think about these things. It’s me, at scale.

A few weeks ago, someone asked me for feedback on their deck after I’d passed on the deal. I ran it through the skill, reviewed the output, made a few edits, and sent it along. Substantive stuff - what was working, what wasn’t, specific suggestions on a few slides.

The response I got back was unexpected. The founder told me he’d run my email through an AI detection tool. It flagged the message as likely AI-generated. And he asked me point blank - is this actually your opinion, or did AI write this?

The Irony

It was a valid question (and he was correct - as I explain below, I heavily relied on AI in my response to him). But here’s the thing that made me laugh. He zeroed in on one specific paragraph about a slide that needed work. That paragraph was one I’d written myself. I’d noticed the slide when I was screenshotting the deck to load into Claude (it can’t read Docsends, sadly) and added my own note about it. The rest - the AI-generated part - he didn’t focus on (although I’m sure that’s in part what caused the note to be flagged by whatever AI detection tool he used).

But his broader question was fair. And it stuck with me.

I wrote him back and explained what I’d done. That yes, AI drafted the initial feedback. That I’d spent significant time training the skill on my actual thinking. That I review and edit everything before sending. And that a year ago, his alternative wouldn’t have been a hand-crafted email from me - it would have been no substantive response at all.

The Podcast Conversation

The next day I was on a podcast with George Samuels, who is a deep thinker about the intersection of AI and human behavior, and I told him this story. We ended up having a long conversation about something that I think a lot of people are navigating right now - the etiquette of AI in professional communication.

George’s suggestion was simple: just be open about it. Not in an apologetic way. Not with a disclaimer that undermines the content. But with a straightforward acknowledgment - something like “I used AI to help draft this, but I’ve trained it on how I actually think about this topic and I review everything before sending.”

That framing resonated with me because it addresses the real concern head-on. People aren’t worried about AI because it’s AI. They’re worried that what they’re reading doesn’t represent the thinking of the person who sent it. That it’s generic filler, not real engagement. Transparency solves that problem. It says: yes, AI was involved, and here’s why you should trust the output anyway.

The Other Side

Around the same time, I had an exchange with my _Capital Evolution_ co-author Elizabeth that got at this from the opposite direction. She asked me if I had been using AI to respond to her in our interactions (she was worried that I had) and flagged a specific text conversation where she thought my responses felt AI-generated. They weren’t. But more than that one exchange, she told me she’d started to wonder more generally whether it was always me on the other end of our conversations.

It was more of a feeling, but one that I think is quite valid - in this case prompted not by my actual responses to her, but by a world in which people are increasingly using AI to aid (or in some cases substitute for) their work. This is especially concerning to Elizabeth, who works in an industry that’s being significantly disrupted by AI (by some estimates 50% of content is now being generated by AI), and I think that sensitivity was coloring how she read everything I sent. When you know someone is deep into AI tools, it’s natural to start questioning whether any given message is “really them.”

This is another argument for being explicit about when you are and aren’t using AI. If you’re transparent when AI is involved, it’s easier for people to trust that the times you don’t flag it are genuinely just you. Without that transparency, the suspicion bleeds into everything - even the stuff that’s 100% human. It’s also a reminder that AI is best used in many instances as an aid and not a substitute. Context matters, of course. But so does taking the time to make sure that things that go out under your name (this blog, as an example - which was drafted by Claude but then significantly revised by me) actually reflect YOU.

What I Actually Think

Here’s where I’ve landed on this.

I’m not going to stop using AI (obviously). The deck review example is a good one - the founder got substantive, personalized feedback on his pitch that reflected 25 years of looking at investment opportunities. His alternative wasn’t a better version of that email written by hand. His alternative was a polite brush-off.

But I am going to start being more explicit about it. Not as an apology. As context. The same way one might say: “I had my analyst pull these numbers” or “my team put this deck together” - because of course you did, and that doesn’t diminish the substance.

The distinction that matters isn’t whether AI was involved. It’s whether the output represents real thinking or is generic filler. I’ve spent real time building and training these tools to reflect how I actually approach problems. That’s meaningfully different from pasting something into ChatGPT and forwarding whatever comes back.

I think we’re heading toward a world where AI involvement in professional communication is assumed, and the interesting question becomes not “did you use AI” but “how well did you use it.” We’re not there yet - clearly, based on the reactions I’m getting. In the meantime, transparency seems like the right bridge.
