may 10 2026

weekly recap #3: may 4 - may 10, 2026

writing more, a consequence of AI

since i started using AI agents heavily, i started writing a lot more. i asked ampcode to analyze my obsidian vault (where i do most of my writing) to see the impact:

Words written / month
2025-08  ▌ 1.3k
2025-09  ███ 7.1k
2025-10  ██▏ 5.1k
2025-11  ██▉ 7.0k
2025-12  ▉ 2.2k
─────────────── AI agents adopted (end Jan / early Feb 2026)
2026-01  ██████▍ 15.4k
2026-02  ██▏ 5.2k (note: pto/vacation)
2026-03  ███████▏ 17.1k
2026-04  █████████▌ 22.8k
2026-05  █▉ 4.7k  (7 days only - on pace for ~21k)
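for reference, here is a minimal sketch of how a tally like the one above could be computed over a vault of markdown notes, grouped by file modification month. this is a hypothetical reconstruction, not what ampcode actually ran - the real analysis may have grouped by note creation dates or frontmatter instead:

```python
# sketch: words written per month across an obsidian vault.
# assumption: each note is a .md file, and its last-modified month is a
# good-enough proxy for when it was written.
from collections import Counter
from datetime import datetime
from pathlib import Path

def words_per_month(vault: str) -> Counter:
    """Tally whitespace-separated words in every .md file, keyed by YYYY-MM."""
    counts = Counter()
    for note in Path(vault).rglob("*.md"):
        month = datetime.fromtimestamp(note.stat().st_mtime).strftime("%Y-%m")
        counts[month] += len(note.read_text(encoding="utf-8").split())
    return counts

def render(counts: Counter, per_block: int = 2400) -> str:
    """Crude bar chart like the one above: one block per ~2.4k words."""
    lines = []
    for month in sorted(counts):
        bar = "█" * max(1, round(counts[month] / per_block))
        lines.append(f"{month}  {bar} {counts[month] / 1000:.1f}k")
    return "\n".join(lines)
```

grouping by mtime is the main simplification here: editing an old note bumps it into the current month, so a frontmatter date field would be more faithful if the vault has one.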

i started writing more because i feel like i need to. agents took a lot of cognitive load off of me, which created this kind of vacuum im not used to. its odd to go from coding multiple hours a day for years to not coding at all. i think this post hits the mark:

Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.

i tend to be pragmatic, so i was quick to rely on AI more than most people. even though their work is a bit worse than mine, they are so much faster, which is a fine tradeoff for most things. my experience is that most of the mistakes they make come down to either (1) bad prompting/outcome definitions, or (2) poor quality code. on the code quality point: its still functional, but it wont be good for the codebase long term.

the end result is that cognitively i dont think as deeply, as often, as i used to. its a weird feeling. it feels like cheating on an exam. so i found myself reaching to just write more.

when im working, i try to create intensive, deep plans. if i dont, i find myself falling into this trap of relying on the agents to think for me. when reading, i now like to write down notes, thoughts, related things (see the weekly recaps for an example). i now itch to start drafting an essay about any esoteric thought i think is interesting.

i also started playing a lot more chess. i know chess is not really a "thinking" game as much as it is pattern recognition / memorization, but i find more enjoyment in playing it and striving to improve than i used to.

what i read this week

  • i recently started reading The Scaling Era: An Oral History of AI, 2019–2025 and will be dropping some thoughts, reviews, and quotes from it that i find interesting. the recap this week is a bit short in breadth because i spent most of my time reading, so most of it will be about this book :)
    • so far i am two chapters (out of 7) in, and it is so easy to read. its the first time ive read a book that exposes concepts to the reader via dialogue. the book is composed of excerpts from the podcasts/interviews Dwarkesh Patel had with notable people from OpenAI, Anthropic, and elsewhere. this style of writing is so easy to digest, follow along with, and stay interested in. its also stimulating to see how the talking style changes depending on who is speaking.
    • my favorite recurring theme is the use of the human brain as a point of reference / thing to compare. some interesting points that stuck out the most:
      • AI model training has made it clear that the human brain is extremely efficient, both energy and training wise:
        • the models are maybe two to three orders of magnitude [100x to 1,000x] smaller than the human brain, while at the same time being trained on three to four orders of magnitude [1,000x to 10,000x] more data. Compare the number of words a human sees as it's developing until age 18. I think it's in the hundreds of millions. Whereas for the models, we're talking about trillions.

      • its an interesting point — how come these models need to be trained on almost all of human knowledge before they begin generalizing in a meaningful way?
  • Gamestop proposal to acquire ebay!
  • The story of Mel, a Real Programmer
    • this is somewhat related to some discussion in last weeks recap about “lights-out codebases”, where coding is fully the work of agents and humans no longer look at any code - not even for reviews. thats no longer programming. i dont know what it is. its not wrong, and i dont think we should avoid it. but there is this sense of loss when doing it - the craft still exists, but it no longer makes sense to practice it.
  • the machine fired me
  • I miss thinking hard
    • i relate to this heavily - i would also claim that i love thinking deeply, but in a pragmatic sense where it is required and necessary for a problem. this was the first time ive seen someone phrase it correctly:
    • Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the “good enough” mark.

    • if you are a pragmatic mix of builder and thinker, AI kinda ruins things for you. you no longer get the satisfaction of thinking through a deep problem, because that 30% improvement over what the AI comes up with is usually not worth it. the satisfaction of building faster kinda makes up for it, but not completely. its like limbo. i feel like thats why ive had this urge to write more since i started using AI.
  • Don't delegate understanding
    • this one is from 2023, and seems to have gotten repopularized on twitter the past couple days. it was not originally framed around AI/agents, but it extrapolates to them well. engineers often talk about using AI to implement what they need while keeping themselves as the coordinator and planner. but i feel like that is cope and signaling. in practice it is extremely hard to avoid delegating more and more to agents. even if you do not realize it, you are most likely delegating a lot of your previous understanding of problems to agents (me included). a lot of understanding lives in the process of implementing. the way i see it, its like thinking about an essay and assuming you have it all figured out - then, in the actual process of writing it down, you start to see holes. agents are similar: they present ideas and plans so confidently that its difficult to notice the holes.
  • notes apps help us forget
    • great piece
    • Flipping through your old notes suddenly “feels like sifting through stale garbage,” as Dan Shipper found, disillusioned after building a galaxy of notes in Roam Research. It turns out most of our ideas and discoveries aren't actually worth that much, not on their own anyhow.

deor.app/posts/weekly-recap-may-4-may-10-2026


© 2026 deor