Learning to Use AI Like a Teammate (Not a Magic Button) 🤝
I’ve been using AI tools for a while now, and I’ve been in the AI social media space long enough to know that people without hands-on experience often assume you can fire off a single prompt and get exactly what you want.
In my own explorations it became apparent almost immediately that getting real value out of AI takes more than that: there’s a learning curve, and prompting effectively is a skill you can actually build.
For me, the breakthrough at work came when I stopped treating AI like a one-shot answer machine and started treating it like a junior collaborator that needs clear constraints, patience, and feedback. That shift made a huge difference in the quality of the results I got back, and quickly moved AI from “oh, this is a fun little thing” to “OH, this actually makes my workload lighter.”
A recent example project was itself quite mundane: planning summer camps for my kids. Each week, we had to pick one class from each of four categories, across nine weeks. Some weeks had limited options, and some options repeated across weeks. My goal was simple in theory and tricky in practice: maximize variety and minimize repeats.
At first, my ChatGPT prompts were vague. I’d ask something like:
“Can you organize these camp options into a spreadsheet?”
The AI did exactly that — and nothing more. So I refined:
“We can only choose one class per category per week. Can you recommend picks that avoid repeats?”
Better. But still not quite right. The real progress came when I started externalizing constraints explicitly and letting the AI maintain state over time:
“Don’t output the final result yet. I’m going to paste week-by-week data. Track what’s already been used and only adjust earlier picks if it becomes mathematically necessary.”
That changed everything. ChatGPT stopped wasting time and tokens summarizing the data for each pasted week and focused on ingesting and aggregating the actual data. It flagged “forced repeats” caused by limited catalogs. It suggested small reshuffles to eliminate collisions later. It even explained why certain repeats were unavoidable, which turned out to be the most valuable part.
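The core of what ChatGPT ended up doing can be sketched as a greedy scheduler that tracks usage and flags forced repeats. This is a minimal illustration with made-up catalog data (the weeks, categories, and class names here are hypothetical, not my actual camp catalog):

```python
from collections import Counter

def plan(weeks):
    """weeks: list of {category: [class options]} dicts, one per week."""
    used = Counter()          # how many times each class has been picked
    schedule, forced = [], []
    for week_num, catalog in enumerate(weeks, start=1):
        picks = {}
        for category, options in catalog.items():
            # Prefer the option we've used the fewest times so far.
            choice = min(options, key=lambda c: used[c])
            if used[choice] > 0:
                # Every remaining option was already taken: a "forced repeat".
                forced.append((week_num, category, choice))
            used[choice] += 1
            picks[category] = choice
        schedule.append(picks)
    return schedule, forced

weeks = [
    {"sports": ["soccer", "swim"], "arts": ["pottery"]},
    {"sports": ["soccer"],         "arts": ["pottery", "painting"]},
]
schedule, forced = plan(weeks)
# Week 2 must repeat soccer (it's the only option), but avoids repeating pottery.
```

Note that this greedy version only decides week by week; the “small reshuffles to eliminate collisions later” that ChatGPT suggested would correspond to revisiting earlier picks, which is exactly the global-versus-local optimization distinction I had to spell out.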
Ultimately, the most effective prompting seems to be about separating the problem definition, data ingestion, planning, and execution into carefully reviewed steps. Each clarification, like “hold on, only one class per category” or “optimize globally, not week-by-week,” was a small course correction that steered the whole result.
The takeaway for me: Define constraints. Iterate. Let the system reason. Don’t expect a usable result from a single prompt; expect one after some collaboration and refinement.