The Trains Run On Time
Or, I Don't Need to Learn Agentic AI Now, Right?
Written by Peter Kaminski, 2026-03-02
♡ Copying is an act of love. Please copy and share.
License: CC-BY 4.0 (Creative Commons Attribution 4.0 International)
I'm on a mailing list with smart folks, a group that's been together for decades. We're debating what today's agentic AI can actually accomplish, and whether recent claims are substantive or hype. My first post on the topic was a bit of a jumble. Here's an attempt to be more clear.
If you'd like firsthand exposure rather than metaphor, I'm teaching a small-group, six-week, hands-on course called Agentic AI with Pete. The next cohort starts March 10, 2026.
Dear Smart Folks,
I'm going to give it another try; I think I can do better than I did last time. Thanks for your patience.
I am not trying to convince or convert anyone, but I am trying to help the people I care about, and more broadly, anyone who will listen. And frankly, I am also practicing my tone and delivery in a safe-ish space, with people who have known me for decades, who can be quite opinionated, and who won't be afraid to give me frank feedback.
I am evangelizing. I bring news from the near future of a rapture, one that will deeply and broadly affect humans and humanity. I believe the news is mostly good, but there is some potential that it is also terrible. It is probably a mixture of both. To be crystal clear, I mean "good" (and I'm afraid, potentially terrible) for humans, and humanity. And in a nod to John Perry Barlow, Humanity Itself.
One of the metaphors I have is of a train. It is leaving the station now. It is moving and accelerating. This particular train is easy for some to board and impractical for others. I hop on and off the train, trying to help more people board. I feel like I've practiced the jumps and holds well enough that I can help many of "my" people board this train. The way I say it, those are knowledge workers who are proficient enough with their computers to understand "files" and "folders," and who can stomach using Obsidian, a piece of software that's mostly friendly and a little bit arcane, used by millions for "personal knowledge management" in the before times.
No one has to board this train. There will be others in the months and years ahead. The next trains will be easier to board. At some point, to live in the future, nearly everyone will board a train.
The thing is, the next trains and their affordances will be designed by the humans and advanced cognitive tech on the train that is leaving now. Let me tell you, the cognitive tech (CT) of February 2026 is qualitatively different from the CT of November 2025. The CT of November 2025 is a big step up from the CT of mid-2025. Which was a huge leap up from the CT of mid-2024.
As I draw a line into the past of CT, we might consider the line into the future. If there was a jump between November 2025 and February 2026, and a big jump between 2024 and 2026, what do you think the rest of 2026 holds, or 2027?
In this room, we are all futurists, so I won't presume to guess an answer for you. You're well-equipped to do that for yourself. I am also sure you are used to the math of futurecasting and can do scenario-based extrapolations from the slim data I've just offered.
So I will try to repeat what I've said in previous messages as clearly as I can: If you haven't used Claude Code with Opus 4.5+ (or some better system) for, say, 20 hours, you aren't looking at the same train that I am.
For context, I've worked full time on knowledge work — a lot of coding, but also a lot of NON-coding — with Claude Code since February 24, 2025. Hundreds if not thousands of hours spent working with cognitive tech that I consider to be roughly equivalent to a human grad student or a senior software developer in being able to reason and reflect and make self-determining judgments about knowledge work.
You probably know the saying, "you'll see it when you believe it." (Read that carefully, it's a clever inversion.)
I know that we who have been enraptured and who come back with little missives about our experiences sound kind of crazy. I know that evangelizing (telling someone generally good news) sounds very similar to proselytizing (trying to convert someone). I know what you hear is hard to believe, and therefore, hard to see.
We here in this room are especially sensitive to AI proselytizing, because we've been waiting (with some hope, and some fear) for the "overnight" success of CT for decades. Surely today is not the day, after thousands of dawns where today was not the day?
I shrug my shoulders. All I want to say is that if you haven't tried Claude Code with Opus 4.5 / 4.6, or something more powerful, for a few dozen hours, I think today is the day you should. Let me know if you want advice on catching today's train, which is still a little tricky to board, but gives you a seat and a say in the trains of the future.
I wrote all of the above because I was inspired to share this little missive from the future:
"If you ever want to see a really interesting AI thinking trace, push it really hard on literature or poetry suggestions."
(Mollick is worth following. BTW, literature and language are a favorite kind of test knowledge work for me, too.)
I knew that if you didn't believe in the future, you wouldn't be able to see what I'm trying to show, so I wanted to provide a warm start, at least. Maybe it will seem trivial or dumb to you, one anomalous signal that doesn't prove anything. Okay, no problem — maybe you're right!
On my way to find that bookmark, I ran into this tweet, which is equally striking:
"I was curious what would happen if two Claude Codes could find each other and collaborate autonomously."
(Unlike with Mollick, I'm not familiar with Papailiopoulos at all, but this work seems sound.)
Do these make sense if you're not on today's train? Perhaps not. But at least I shared them, hopefully with enough context that, if they sound stupid or weird or trivial to you, you'll be curious enough to explore.
Coda
C., thank you for your clear articulation of your concerns about IP.
For better or worse, I've never made a living off copyright, and in fact, have spent a couple decades trying my best to give stuff away (and to make up revenue based on volume, lol!).
I sympathize, definitely. Perhaps I don't empathize as much as I should. I say that for context, not as a swipe at myself, you, or anybody else.
But my top-line thoughts:
- You, of all people, should get on this train now! Not because you like or dislike the way it was built, but because the people on this train will have outsized agency in determining the architecture of all future trains. (Plus, it turns out agentic AI provides you with executory superpowers in battling against the evil you see; while holding your nose, you can also turn its potential for evil against itself in the fight for good!)
- I wish LLM training data were a public good / public commons for all humankind. It troubles me that the best training data is held by corporations rather than humanity. But path dependence led us here. We can't change the past, but maybe "we" can bend the arc of the future to the light. (Reminder: see the first point above.)
- Every time a text-extrusion LLM spits out a copyright violation (or an image-extrusion generator spits out a trademark violation), and a human publishes that violation, the LLM provider should be partially liable. (Likewise the human; it's not really a copyright violation until it's published, and the LLM didn't "publish" it.) (In the future I guess we'll say something like "entity," or "human or agent," because agents will be publishing alongside humans.)
- Compare a library and an LLM. Simplistically (I know it's more complicated now with the publishers etc., but conceptually), the library has purchased one copy of Author X's book. Many people can check that book out, read it, take notes, return the book, for free. If readers want closer access to the author — more timely, more interactive — they find the author and their other offerings, and that's where the author makes their real money. How are LLMs different?
- Compare a professor and an LLM. The professor bought one copy of Author X's book, decades ago. Read it, loved it, treasured it, recommended it to others. She teaches a course which includes significant conceptual derivations from the book, being careful not to plagiarize nor violate the author's and publisher's copyrights while doing so. Yet, thousands of students are enriched from the lessons conveyed by that purchase of a single book. How are LLMs different?
- Copyright law, trademark law, the concepts of copyright and commons, even credit for authorship: none of them has kept up or fared well with the advent of AI training. We need smart, thoughtful public discourse about how humanity will live with the coming age of cognitive tech and its human masters, and with the evolving emergent properties of humans, humanity, and cognitive tech working together.
In community,
Pete