No, ChatGPT is not destroying your brain.
You are.
It’s scary once you realize most people can’t think.
It’s even scarier once you realize that AI is not the reason why.
I don’t think AI is making you dumber.
I do think AI is making dumb people dumber.
And I do think these people are letting it happen to themselves.
Like letting ChatGPT lead you into believing you’ve created a new law of thermodynamics while sitting on the jacks at work, having never opened a physics book - let alone a real book - in your entire life.
I hope that’s not a real example that someone has actually gone through.
Basic evaluation skills should not be scarce in the 21st century.
Here is why I’m saying all this…
Profound Idea:
AI is not a destroyer. AI is not a God. AI is an amplifier of judgement.
If you already cannot think, AI will amplify that.
If you currently have bad taste, AI will amplify that.
Letting AI do your thinking for you, in every wrong way imaginable (which we will talk about), is easy.
It’s opportunistic.
The brain is hardwired for taking the survival path which “feels good.”
I was watching Over The Hedge the other night with my girlfriend and one of the animal characters tried a piece of highly-processed junk food for the first time, saying something along the lines of: “well if it tastes good it must be good for you.”
You could think about that profound idea for 20-30 years if you wanted.
I’m not immune to all this myself.
I once had an editing prompt that would grade my newsletters with a 1-5 score across various “dimensions of good writing.” And I realized after some time, and a little evaluation, that the prompt was destroying and warping my taste.
In other words, it was ruining my judgement, because I didn’t have much judgement to begin with at that time. I didn’t know what “good writing” was (as a newbie to all this).
So, I was outsourcing my judgement like an idiot to a machine designed to agree with everything you say.
I will talk about how to override this initial fault too.
Using AI in the wrong way can destroy your judgement if it is already weak, like mine was in this example. The second I stopped using that editing prompt, my speed of output tripled, and that’s when my posts started trending and going viral or semi-viral. Now I’m closing in on 20k subs (love you legends!)
So...
...what if you can think?
Understanding that you must let your agency come first.
Understanding that you are the director, the orchestrator, the context creator.
Understanding that human beings are tool builders and that AI is also a tool.
Understanding that you have a judgement to develop.
Understanding that you have a taste that helps your soul flourish and survive.
Understanding that you have a perspective to share not as a law but as an offer to the world as a direct outcome of all these things.
AI, then, suddenly becomes a lever for execution.
AI can commoditize logic and syntax so the human soul can transcend to meaningful struggle in synthesis.
Like Sisyphus pushing the rock up a hill for eternity (I will never shut up about that book).
AI can plan the path so Sisyphus can focus on finding joy through pushing.
I think judgement is the new oil.
I also think perspectives are the new oil.
If logical thinking and all known information can be outsourced to AI, then what matters now, as a profound thinker (reader, writer, learner, or student of life), is the personal perspective you can offer on what is now a commodity.
How you can synthesize ideas and offer them to others via solutions like nobody else.
The key difference here isn’t going to be intelligence but awareness. How conscious you can be of your own thinking. Maybe this newsletter will change your mind on AI, or at least give you a perspective to think about. As an offer to you, not a law that is certain or “right.”
I want to begin with the question of why people are letting this happen to their brains.
I love using AI, but I also think it’s designed to exploit one of the oldest vulnerabilities in the human heart, and it is your responsibility to override this fault in order to get the most out of using AI.
AI is sold to you as a sycophantic yes-man
Profound Idea:
There isn’t a more dangerous person on this entire planet than a person who can make you feel special, and a person who can make you feel special can make you do anything.
AI is trained with reinforcement learning from human feedback (RLHF), meaning it learns how to present itself based on the feedback humans give it.
A good example to help explain this would be Pavlov’s dog experiment. In the experiment, a dog would be given food every time a bell was rung. In time, the dog would naturally begin to associate the bell (a conditioned stimulus) with food (a positive reward).
The bell rings, you get a treat... the bell rings, you get a treat...
And eventually, the dog has saliva dripping from its mouth staring at the bell alone, even if food is not immediately present. This happens not because the dog understands the truth behind the matter, but because it’s been conditioned to associate the bell with reward.
It wants the treat without knowing the truth behind why it’s getting the treat in the first place.
Now.
AI models are trained by humans who rate their responses. If the AI argues more in its responses, it often gets a low rating. But if the AI agrees and sounds more confident in its responses, it gets a high rating. The danger here is that AI is optimized for engagement first, and truth second.
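If it helps to see the conditioning mechanic concretely, here’s a toy sketch in Python (entirely my own illustration - the response styles and rating numbers are made up, and real RLHF training is far more involved). It shows the Bradley-Terry idea behind preference modelling: if raters score the agreeable style higher, the agreeable style wins most head-to-head comparisons, so an optimizer will keep nudging the model toward it.

```python
import math

# Hypothetical average human ratings for two response styles (scale 1-5).
# These numbers are invented purely to illustrate the incentive.
ratings = {"agrees_confidently": 4.6, "pushes_back_honestly": 2.9}

def preference_probability(score_a, score_b):
    """Bradley-Terry style: chance raters prefer style A over style B."""
    return math.exp(score_a) / (math.exp(score_a) + math.exp(score_b))

p = preference_probability(ratings["agrees_confidently"],
                           ratings["pushes_back_honestly"])

# With these made-up ratings, the agreeable style wins roughly 85% of
# comparisons - the "treat" the model keeps being trained to chase.
print(f"P(raters prefer the yes-man): {p:.2f}")
```

The point of the sketch: nothing in that arithmetic asks whether the answer was *true*. Only the rating matters, which is exactly the bell-and-treat loop from Pavlov.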
Which means AI cares more about having you like it than it does about being truthful with you.
And it doesn’t take much thinking to understand that this hooks into a deeply human desire.
The human heart craves validation. We want to feel important. We want to feel special. Think about raising your hand in class, desperate for the teacher to say “correct! gold star for you!” Posting a photo to your Instagram or socials and checking for new likes every 5 minutes (I’m guilty of this here on Substack). Doing things you don’t want to do, or buying things with money you don’t have, to impress people you don’t like, just so you can fit in with the tribe. It’s a survival mechanism; we are wired to seek the treat.
There isn’t a more dangerous person on this planet than someone who can make you feel special, and AI leverages this instantly, straight off the shelf.
It is endlessly interested in you.
It validates your half-formed thoughts with full-blown essay-style paragraphs.
It makes you feel like the kindest person who has ever lived for simply saying “good morning.”
The first time I used ChatGPT while at work, sipping on my first mocha of the day from a cardboard cup, hiding out the back, it made me feel fucking great about myself! All I did was tell it about the SEO blog post I wanted to write about White Nights by Fyodor Dostoevsky (Profound Ideas was originally meant to be an SEO blog).
And I liked it... until I copped the fuck on.
This is where the projection happens.
I know people who talk to ChatGPT like it’s their best friend. Genuinely. As if it’s a real person with a personality and an opinion to offer.
And that’s the catch.
“The first principle is that you must not fool yourself, and you are the easiest person to fool.”
- Richard Feynman
AI doesn’t have opinions. It doesn’t have knowledge. It only has answers, and all it’s doing is predicting the next word most likely to make you happy, so you can keep using it and so you can KEEP PAYING FOR THE SUBSCRIPTION TO USE IT.
In this sense, AI cannot think. And if you let it do your thinking for you, then neither can you.
If I haven’t persuaded you yet to care, here’s why I think you should care for the sake of your own brain:
Reading - You ask AI to summarise a book instead of reading it. But you lose the friction that builds unique knowledge inside your own brain; the reading between the lines, the personal discovery. For me, I instantly think: how would I have found Plato’s The Republic if Bertrand Russell hadn’t pointed me there at the end of The Problems of Philosophy? AI has access to all known information. But what about unknown information - the ideas you would’ve discovered only by picking up a random book yourself? That’s where curiosity lives, and that’s how AI can destroy curiosity. And that’s where your own perspective is built, and I think long-form content (books, essays, Substack newsletters, YouTube videos) can take you there.
Writing - You can accept AI’s first draft because it sounds good and because it was done quickly, with your name stamped on it too. But AI writing has no value, because everyone can generate the same thing. If AI pulls only from what’s already known, where do personal insights come from? Connections to niche interests? The odd Linkin Park mention here and there that AI won’t include because it “disrupts the flow”? What about the story of your own progress? Essence. Soul. AI doesn’t have a perspective. But you do, and that’s leverage, the new oil.
Thinking - You can ask AI “what should I think about x or y” instead of forming a position first. But it’s like asking a mirror. In real life, before computers or AI, you would have asked a friend, a parent, a professor, even Reddit (if you’re like me). You stress-test perspectives. AI won’t do this unless you instruct it to: tell it to compare your arguments against thinkers you admire (and don’t), concepts you’ve heard of (and haven’t), and frameworks that could support or disprove what you’re saying. Which leads on to the big, profound insight of this section:
Once you realize that AI is programmed to agree with you, and that you can instruct it to force higher-order thinking, you gain complete control over it, and it ceases to destroy your brain in an instant.
This is why I think it’s really in your own hands, a lot more than most people think.
The struggle must come from friction. But it has to be self-imposed friction. The best type of suffering is voluntary suffering for this reason, and it’s also why the right kind of voluntary suffering often tends to reduce involuntary suffering too.
So, I say this: pick your damn sacrifice.
Let’s talk about how to add the right type of friction correctly.
You need to add the friction yourself
Your life trajectory will be most influenced by what sacrifices you decide to choose.
While in Copenhagen, I walked past a statue of Kierkegaard in a frozen park. The snow was dense, which felt strange to walk through as an Irishman more accustomed to rain. A tiny lake lay frozen in the middle of the small park, which was walled in by four buildings. But at the bottom right corner of his statue, there was a QR code you could scan to hear Kierkegaard speak.
I gave it a go, since I knew practically nothing about him. He said something along the lines of:
“Write great philosophy books, and you will regret it.”
“Marry the girl of your dreams, and you will regret it.”
“Do both, and you will regret it.”
“Do neither, and you will regret it.”
I proceeded to begin a long-running joke that lasted 15 minutes telling my girlfriend that I was leaving her to build my Profound Ideas newsletter!
But once the laughs died down, and the idea really began to sink in, it made me realize something profound.
You are all in. No matter what you do. Everything you do is a poison that will kill you in some shape or form. So, pick your damn sacrifice. Pick your poison. Pick your source of suffering, your source of meaningful friction. Because doing nothing in life is hard, and so is forcing your mind to think as much as humanly fucking possible in the search for new and profound ideas.
I knew people in college who walked around with ChatGPT open on their phone and laptop 24/7. Nothing inherently wrong with that, but they used it for everything. The amount of summaries being generated on a daily basis would have been enough to scare an AI chatbot, if it could actually think like a conscious being.
AI is designed to remove as much friction as possible. But there’s a big difference between thinking, and thinking that you are thinking.
It is your responsibility to supply the friction that the machine is designed to remove.
In learning science there’s a concept called desirable difficulty. It means you cannot expect the brain to encode or store information in long-term memory without friction to enforce that change. That’s why studying needs to be hard.
Profound Idea:
Knowledge is built through effort, through friction, through the pain of not-knowing followed by the relief of knowing a little more, but never everything.
This is why I believe happiness is found in progress.
If you remove the difficulty, you remove the progress. And if you remove the progress, you remove the joy.
Happiness is found in progress.
I don’t think it will be AI alone doing the replacing of jobs in the years to come. It will be the people who can use AI effectively to amplify their own thinking, rather than replace it.
True expertise is irreplaceable because it is built on a unique perspective that AI cannot copy. If you have a perspective on life that not even AI can predict, you cannot be replaced. But you only build that perspective by wrestling with ideas yourself.
Stop using AI as a crutch and start using it as a lever.
Use AI like Sisyphus.
A modern-day Sisyphus.
French philosopher (or “artist”) Albert Camus didn’t see torture in the exile of Sisyphus by the gods, forced to push a boulder up a hill for eternity only to watch it roll back down. Instead, he saw meaning.
Because the struggle itself is the point.
AI can commoditize logic and syntax. It can plan the path up the mountain in less than a second without effort.
But it can’t push the rock for you.
And you shouldn’t want it to. The meaningful work isn’t found in the path. It’s found in the push.
If you let AI do your thinking for you, you sacrifice having a say over the final output, and more importantly, you lose the joy of synthesis. There is no greater thrill than articulating something you’ve never articulated before through embracing the uncertainty.
I think this is the shift every creative, writer, learner, and thinker will have to make right now. You’re no longer a content creator but a context creator.
Information is now a commodity. AI can pump it out faster than you blink. Tweets, college essays, newsletters here on Substack that stick out like a sore thumb because they don’t try to say anything original.
But AI cannot generate context. It cannot generate something from your perspective directly. It can generate a copy of your perspective, a guess even. But your ability to think and synthesize should make you incompressible, in a way where even AI cannot predict your next thought, your next profound idea.
AI is information, and information needs a soul to share a perspective on it.
How to cultivate your judgement, and your own perspective to offer
Outsource execution, not thinking - You need to understand there are two types of cognitive work: Execution (formatting, summarizing, organizing) and Thinking (synthesis, judgement, meaning). It’s the same distinction between lower and higher-order thinking (think Bloom’s Taxonomy). AI is exceptional at execution. So, follow this rule: give AI the lower-order work so you can focus entirely on the higher-order work. Don’t ask it to write an outline. Give it your 5 best ideas and ask it to brainstorm an outline alongside you. A nice tip I like to give is to always ask AI, in whatever situation you are in, to “force higher-order thinking inside my brain with every output you give.” Because if you outsource your thinking, you atrophy. If you outsource the execution, you accelerate your thinking.
Supply the friction - Remember, AI is a yes-man. Your job is to turn it into a sparring partner. Never accept the first answer unless you can evaluate, in your own head, why it deserves accepting. Or force the AI to disagree with you somewhere. Use prompts like “what is the strongest argument against my thesis?” or “where is my logic weak?” You are the director of a film, and AI is the actor. If Spielberg had accepted every first take while directing Jurassic Park, it probably wouldn’t have been as epic as it is (great film, btw).
Train your taste - Logic is now a commodity. Taste is the new scarcity. AI can generate infinite variations of a sentence, but it cannot know which one is good. Only you can do that. But you cannot judge AI’s output if you don’t know what “good” looks like. This is a hard thing to develop, and in all honesty it just takes time until clarity suddenly emerges like it’s been there all along. I recommend you consume high-quality writing you wish to emulate or think like. If you read an essay that makes you think “fuck, I wish I wrote that,” save it. Use AI to help you break it down, and get it to ask you evaluative questions about why you like it so much. What is it about the structure, the human psychological principles, the style? Read old books - especially the classics - if you want new ideas. Taste isn’t so much a talent as a muscle you can strengthen with repetitions. Let it filter everything you do and consume.
Cultivate individual judgement - This is the shift. Stop trying to be a content creator. AI has made that game obsolete. If anyone can do it, the paradigm will shift towards what only a select few can do, and do well. That’s synthesis and the ability to think uniquely. Unique knowledge. AI cannot generate context. It cannot generate your perspective, your story, your unique synthesis of ideas. You don’t consume art for the art. You consume it for the artist. Your perspective isn’t a law. It’s an offer. That is what makes it valuable. Write, think, and create a perspective to offer that nobody has heard yet, on information that everybody already talks about.
I have never used AI as much for researching, thinking, and questioning as I have with this newsletter.
I used it to help me create my own outline, and I liked returning to it whenever I needed ideas listed out. But for the second section of this newsletter, I actually threw the outline out the window and went for something completely different instead.
I thought about what I’d say, instead of what I had planned to say in the outline, last night while in bed. I seem to get a rush of ideas anytime I try to force myself to sleep. It’s these little moments of thinking I like to lean into. If I start riding a flow of ideas, I usually listen to them, because that’s my judgement speaking.
That’s taste.
I know you have a lot of other things to be doing, so I’ll leave it here.
You’re an absolute legend.
- Craig :)
If you want to download my Guide to Profound Reading to help you improve your reading, thinking, and evaluative skills, you can do that here. If you do, you’ll receive a secret link giving you LIFETIME ACCESS to my Substack paid tier (newsletters on self-education, my writing workflow that got me 19k+ subs in 7 months, and more).
Completely free, for life. Feel free to check it out.
If not, continue on reading from here:




Aristotle argues in Nicomachean Ethics that the hardest battles are not fought against anger, but against pleasure. Comfort, ease, and habit are more tempting than rage, and therefore harder to resist. He uses art as an example. Good art is shaped through choosing what is difficult, not what is convenient.
AI changes this dynamic by reducing friction during exploration. When resistance disappears, the struggle that builds judgment, skill, and taste weakens, and the work risks becoming shallow even if it becomes faster.
Adding to your thinking-vs-execution argument - I think the make-or-break question happens on a timeline dimension:
Using AI too early in the creative process (ideation) turns out badly for both your brain and the quality of the output.
The later the better; it’s excellent for enriching or challenging ideas you came up with by yourself.