Before We Automate Anything ⚙️
I've been writing software long enough to have lived through a couple of hype cycles.
Some felt like hidden secrets, some were loud and noisy and useless. Many were sold as inevitabilities and quietly became maintenance burdens for unused features (I'm looking at you, Alexa). So when generative AI started going from "that's neat" to "the main thing investors wanna hear about" and "part of every product roadmap everywhere all at once," my first instinct was... this'll be a fad, and I'm not going to get too invested.
I played with Midjourney a bit, I used ChatGPT like a search engine, I toyed with Grok, I made an AI-powered chatbot for Twitch Streamers, I built a few lightweight AI-based tools for work projects, but none of it really impressed me much.
But something has shifted in the last few months, and now I'm paying attention.
Right now, AI feels like a powerful tool that was just left running in the workshop. It's impressive, but it's also dangerous and can do more damage than good if wielded carelessly.
Somewhere in the last few months, my occasional curiosity-driven dabbling, and my habit of using ChatGPT as a stand-in for Google or Stack Overflow, turned into focused, guided prompting with organization and purpose.
I stopped feeding it the occasional function in search of a single syntax clarification and started using it as a sounding board for planning a task, as well as a draft writer for new feature blocks, tests, and utility files.
Something has gotten A LOT better, both in the tool itself and in my understanding of how to leverage it to make my own workflow more efficient, and I've decided it's time to turn my dabbling into a serious exploration.
I'm looking to figure out what this actually does well for my workflow and where it absolutely fails. I'm not trying to prompt my way out of work. I'm interested in quite the opposite: where can AI remove friction so that my planning and execution can be more focused and effective?
So before automating workflows, replacing tools, or inventing new ones, I’m starting with smaller questions:
- What parts of my work are energy drains but not value creators?
- Where am I repeating myself just because that’s how it’s always been done?
- Which decisions require judgment, and which just require patience?
Most of my curiosity right now lives in what I call the leg-work zone: expanding test coverage, refactoring existing code, updating nested dependencies, reviewing and diagnosing issues in existing code, and possibly adding some form of assisted code review or sanity-check-level QA. It's the stuff that doesn't require a ton of decision making or creativity, just effort.
Anyway, this isn’t a productivity blog. It’s not a tutorial site. It’s definitely not a thought-leadership platform (if you ever see that phrase here, please call me out).
This is a notebook for me to document what happens when I treat AI the same way I would treat any other powerful abstraction: skeptically, incrementally, and with a bias toward understanding the failure modes before trusting the success cases.
If something here sounds unfinished, that’s because it probably is. If something sounds overly cautious, that’s probably intentional, too.
We’ll get to the tools. We’ll get to the workflows. We’ll even get to the mistakes.
But first, it felt important to say this out loud:
I’m not trying to remove the human from the loop.
I’m trying to figure out where the loop actually is, and I'm doing that by putting a duck in there that I can talk to.