AI slop with my name on it
I love AI. With Claude, I’ve been doing work that I would absolutely not have been able to do a year ago. I’ve also had issues with AI at work. A few failure modes I’ve been thinking about all revolve around one theme: it’s easy to put your reputation in the hands of AI without realizing it, and to let it do dumb things on your behalf.
Bad: Using AI to write standup updates
I have a Claude in my Obsidian vault. Every day we chat, it looks at my PRs via the gh cli, and it tells me that I did a good job. It keeps notes. A natural conclusion from this is that Claude could just write my updates for me. This sounds good but doesn’t work.
There are a few obvious problems here.
Claude has a tendency to try to be “helpful” and write down every detail it has. This makes for insanely elaborate updates where only Claude and I know what it’s talking about. Useless update.
You would think you could fix this by asking Claude to be concise. This also doesn’t work, because Claude doesn’t have good enough judgement to write only the things you actually want to communicate.
I tried a bunch of things to iterate here. Eventually, I decided to just write my updates myself. It’s much easier to write the sentences yourself than to try to beat Claude’s writing into the correct shape.
Bad: Making AI code review your team’s PRs for you
Here’s the thing: If I wanted Claude to review my PR, I would get Claude to do it myself.
To be fair, there are good ways to use AI for code review and bad ways to use AI for code review. Here are a few patterns that I quite like:
An AI reviewer that runs on PRs automatically and finds objectively incorrect code / bugs.
A human discussing the PR with Claude to gain context on the changes and then explicitly filtering all the AI’s concerns and converting them to comments that are useful.
And here is a pattern that I absolutely hate: “Hey Claude, review this PR, leave comments using the gh cli”. Claude then proceeds to leave ridiculously nitty comments that don’t make sense beyond the surface level. If you do this, you are playing fast and loose with your reputation.
Bad: Low quality documents justified via “written by Claude”
I’ve been using AI for design docs. Stuff that I have the final shape of in my head but want to write down quickly. It’s good for that. However, I have fallen into the trap of getting Claude to scope a plan and write it down, sharing it with my team, and then realising it is just complete trash. Something obvious is missed, or Claude has made some assumption that makes zero sense. These have been some of the most embarrassing docs I’ve ever written.
Another common anti-pattern is: “I had a conversation with Claude, it brainstormed a long list of stuff we could do for project X, here’s the doc with the list for inspiration”. These things are useless. The long list will look good at the outset, but if you dig deeper, the ideas turn out to be unrealistic. Fifteen people have now read the doc with no value coming out of it. All of this could have been avoided if you had spent 15 more minutes and actually dug deep, instead of putting the first thing Claude spat out into a Google Doc listicle.
Neutral: AI for PR descriptions
You would expect AI to write good PR descriptions. Just like with design docs, PR reviews, and standup updates, you would be mostly wrong. The good thing here is that the median human-written PR description is also mostly trash. With humans, the issue was that you would get PRs with “No description” and have to figure things out yourself. With AI, instead of no description, you get a large description that goes too far into the weeds without much real context on the motivation. This is still probably better than the state of things a few years ago, but worse than being on a team that cares about PR descriptions and actually takes the time to fill them out.
Good: Performance reviews
The one place the Claude in my Obsidian seems insanely helpful is performance review season. Here are a few very useful questions I can ask it:
“Can you find me examples of interactions that show I helped in project X?”
“Look through my pull requests and point out work I’ve done outside my main project that I might have forgotten”
“What interactions have I had with people outside the team that I’ve forgotten to include and would be useful to mention?”
In the past I’d have to go rummaging through my PRs manually, and I’d still miss things! Conversations on other projects that led to improvements are especially easy to forget, but quite valuable to capture. This is one of the cases where Claude being exhaustive and verbose is probably better.
The meta theme here is that with every interaction, your reputation is on the line. Impressions are formed. Trusting Claude or GPT to do things for me means putting my name on the line. It’s easy to fall into the hype and let Claude write beautiful sentences that mean nothing. It’s still hard to write something genuinely valuable, and AI isn’t the best tool for that.
PS: I would absolutely love tips on how I’m using AI wrong here and how to get more value out of these tools. Please send things my way, it would be much appreciated.