# Investors Checking Up
My supervisor casually asked if I could write up a bit about "how we are using AI these days" that he can use in a presentation to investors. I'd be delighted to share what I sent him:
## How We Use AI in Our Workflow

We see AI agents as collaborators in our engineering process rather than just tools that generate code. Each session begins with a clear goal rather than a rigid specification. The AI proposes a direction, and if it veers off track, we're quick to provide feedback and adjust course instead of starting from scratch. This approach of "steering" rather than constantly re-specifying keeps us moving swiftly.

We also rely on a multi-agent review process across our different tools. One agent might draft and implement changes while another reviews the pull request (PR). The initial agent then addresses the feedback, and we continue to iterate. Each agent has its strengths: some are great at identifying edge cases, while others excel at the hands-on work of resolving issues. Comparing their outputs leads to more reliable results than trusting any single agent's first attempt.

## AI in Story Grooming (Planning Mode)

One of the most effective ways we use AI is in the grooming phase, before a task is assigned. In this planning mode, the AI doesn't write any code; instead, it reviews the task ticket, explores the relevant codebase, and outlines a detailed implementation plan. The plan includes the files likely to change, affected modules, necessary migrations, touchpoints for authentication and permissions, and any assumptions that need validation from a human.

There are two big benefits to this approach:

1. **A clear plan for future agents to follow**: The output of the grooming stage becomes the initial prompt for the implementing agent. It doesn't need to figure everything out from scratch; it starts with a human-approved plan, which significantly cuts down on wasted time and effort.
2. **Risks identified early**: The process also surfaces issues such as regressions, edge cases, and important considerations (like data-shape changes and backward compatibility) before we commit to a solution.
Catching these things during the grooming phase is much cheaper and less risky than finding them later in review or, worst case, in production.

Our typical workflow looks like this: we groom with AI, a human reviews and confirms the plan, we implement with AI based on the validated plan, and then another agent conducts the review. Each step reduces the risk carried into the next.

## What Lies Ahead

We're getting closer to handing off both grooming and implementation for smaller, well-defined tickets directly to our AI agents. In that setup, human reviewers oversee the output rather than driving the process themselves. The review and planning disciplines we've built make this shift safer: the agents work within parameters we've thoroughly validated. More complex tasks will still require human input, but our ability to let agents manage simpler tickets is steadily improving.

## Habits That Have Served Us Well

- **Build security and process adherence into the AI workflow**: Everything that can be baked into the workflow ultimately is: coding conventions, commit-message formatting rules, Jira ticket requirements, release notes, documentation updates, and so on.
- **Avoid quick fixes**: If an AI suggests a hacky workaround, we push for a more robust architectural solution instead. In the long run, this often saves time and effort.
- **Build skills, hooks, and scripts for repeated tasks**: When agents struggle with the same task repeatedly, or we simply notice something being done often, we build an agent skill for that task so we can define the expected output more precisely and save on token use.
- **Document environment constraints once**: Anything an agent might otherwise need to rediscover (like platform quirks and existing configurations) is documented in shared instructions, so it's automatically loaded in future sessions.
- **Track tasks carefully**: For intricate projects that span multiple areas (like API, UI, and infrastructure), we keep close tabs on tasks and use sub-agents for independent research in parallel.

In short, we leverage AI agents as agile and adaptive partners throughout the entire process, from grooming and implementation to review, rather than just as tools for quick text completion. The real value lies in the interactions between humans and agents, and among the agents themselves.
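As an illustration, the draft-and-review loop between agents described above can be sketched in a few lines of Python. This is a minimal sketch under assumptions of our own making: the "agents" are stand-in functions, and every name here (`review_loop`, `stub_draft`, `stub_review`, the `TICKET-123` label) is illustrative rather than part of any real tool or API.

```python
from typing import Callable, Optional

def review_loop(
    draft: Callable[[str, Optional[str]], str],  # task + feedback -> new draft
    review: Callable[[str], Optional[str]],      # draft -> feedback, or None if approved
    task: str,
    max_rounds: int = 3,
) -> str:
    """Iterate between a drafting agent and a reviewing agent
    until the reviewer approves or we hit max_rounds."""
    feedback: Optional[str] = None
    result = draft(task, feedback)
    for _ in range(max_rounds):
        feedback = review(result)
        if feedback is None:            # reviewer approved the draft
            return result
        result = draft(task, feedback)  # drafting agent addresses the feedback
    return result                       # never approved: hand off to a human

# Stub agents so the sketch runs standalone; real agents would call AI tools.
def stub_draft(task: str, feedback: Optional[str]) -> str:
    return f"{task} [revised: {feedback}]" if feedback else f"draft for {task}"

def stub_review(draft_text: str) -> Optional[str]:
    return None if "revised" in draft_text else "please handle edge cases"

print(review_loop(stub_draft, stub_review, "TICKET-123"))
# prints "TICKET-123 [revised: please handle edge cases]"
```

In practice, the value comes from plugging different tools into the two roles, since a reviewer with different strengths catches issues the drafter misses.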