These are just a few notes about my experiments with AI coding tools.
Key takeaways
- AI tools will indeed change the way we code. They are useful for some tasks, a distraction for others.
- To me, they are especially helpful for chores I don’t want to do: learning and forgetting yet another YAML configuration format, writing boilerplate code, bash scripts, etc.
- AI is far from replacing engineers. It often gets stuck on trivial issues and then attempts random changes to fix the problem.
- AI usage leads to a lot of generated code. Without AI, my laziness pushes me toward code reuse, abstraction, and better design. AI can generate hundreds of lines of code in a minute, and my laziness pushes me to accept it all, which can lead to a worse design.
All of the examples below are related to ShedLock, an open-source project I maintain.
New Firestore provider
ShedLock can integrate with various DB technologies, and from time to time I get a request to support a new, exotic one. Recently I got a request to support Firestore. Usually I ask the requestor to send a PR. This time, I asked Junie to implement it. The process is pretty streamlined: the API is well defined, the structure of the code is similar across technologies, and there is a test suite that needs to pass, so this is an ideal task for AI.
Junie checked the structure of the project and implemented the code. Then I asked it to implement tests. Again, it found which class to extend to get the test suite, configured Firestore in Testcontainers, and almost did it all. But then it got stuck on a simple problem: it was configuring the test client using `setHost("http://" + firestoreEmulator.getEmulatorEndpoint())`. It’s kinda obvious that the host should not have the `http://` prefix, but not to the AI. It started to configure random environment variables and make other random changes without solving the problem. But other than that, AI has been a huge time-saver. The implementation took like 15 minutes, you can see it here.
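For reference, this is roughly what the fix boils down to. Testcontainers’ `getEmulatorEndpoint()` already returns a plain `host:port` string, so the scheme must not be prepended; the helper and the endpoint value below are made up for illustration:

```java
public class EmulatorHostFix {
    // Firestore's setHost() expects plain "host:port"; strip an accidental scheme prefix.
    static String toHost(String endpoint) {
        return endpoint.replaceFirst("^https?://", "");
    }

    public static void main(String[] args) {
        // Stand-in for firestoreEmulator.getEmulatorEndpoint() (assumed value)
        String endpoint = "localhost:8080";
        String wrong = "http://" + endpoint; // what the generated code produced
        System.out.println(toHost(wrong));   // prints "localhost:8080"
    }
}
```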
Fixing BOM
ShedLock provides a BOM (bill of materials) with a list of all modules. So every time somebody adds a module, they should add it to the BOM. And surprise, surprise, people often forget. I wanted to check if any modules were missing, but checking it manually is boring: you need to compare the list of modules with the BOM. Or you can ask your friendly AI and you are done.
Again, a huge time-saver. But what if you want to check the completeness at build time, for each PR?
Checking BOM completeness
Let’s ask Claude: “Is there a way (Maven plugin or something similar) to check that all modules are mentioned in the BOM?” Claude goes directly for a bash script, trying to use `xmllint`, which is not installed on my machine. I then try to steer it to use Java or Groovy, but in the end, it tried a few approaches and came up with a bash script like this. Does it work? Yes. Do I like it? No. Do I know a better way? No. And this is the tricky part. I have a working solution which I don’t like. Maybe there is a better one. Maybe one of the solutions the model tried was the right one and did not work due to some trivial issue. I could continue prodding the AI for a better solution and spend the whole day doing that.
For example, I can ask it to use Maven; it will praise my cleverness and spit out a Maven enforcer rule like this. Do I like it? Almost.
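If you are curious what the core of such a check looks like, here is a minimal sketch in plain Java: extract the `<module>` names from the parent pom and the `artifactId`s from the BOM, and diff the two sets. It is a simplification, not the actual enforcer rule: the XML below is a toy stand-in for the real pom files, there is no namespace handling, and module names are assumed to equal artifactIds.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashSet;
import java.util.Set;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class BomCheck {
    // Evaluates an XPath expression against an XML string and collects the text values.
    static Set<String> extract(String xml, String xpath) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList nodes = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate(xpath, doc, XPathConstants.NODESET);
        Set<String> result = new LinkedHashSet<>();
        for (int i = 0; i < nodes.getLength(); i++) {
            result.add(nodes.item(i).getTextContent().trim());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Toy stand-ins for the real parent pom and BOM (hypothetical content).
        String parentPom = "<project><modules>"
                + "<module>shedlock-core</module>"
                + "<module>shedlock-provider-jdbc</module>"
                + "</modules></project>";
        String bom = "<project><dependencyManagement><dependencies>"
                + "<dependency><artifactId>shedlock-core</artifactId></dependency>"
                + "</dependencies></dependencyManagement></project>";

        Set<String> modules = extract(parentPom, "//modules/module");
        Set<String> managed = extract(bom, "//dependencyManagement//artifactId");
        modules.removeAll(managed);
        // In an enforcer rule you would fail the build here instead of printing.
        System.out.println("Modules missing from BOM: " + modules);
    }
}
```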
This is interesting. Without AI, I would have checked more thoroughly whether there was a better way. If not, I would have abandoned the task, as the value is not worth the effort. Or I would have implemented the plugin and shared it with others. But with AI, the effort of publishing it is way bigger than the almost-free implementation, so I will keep the plugin to myself. I am afraid we will see this more and more often: instead of reusing tools and simple apps, we will generate tailor-made ones. With AI, it’s faster and easier to generate a simple tool than to search for and evaluate existing ones.
Debugging flaky test
I had a flaky test. From time to time, I got a few milliseconds of difference in one of the tests.
Error: MsSqlExposedLockProviderIntegrationTest>AbstractJdbcLockProviderIntegrationTest.shouldCreateLockIfRecordAlreadyExists:81->AbstractLockProviderIntegrationTest.shouldCreateLock:49->AbstractJdbcLockProviderIntegrationTest.assertUnlocked:56 [is unlocked]
Expecting actual:
2025-06-19T16:01:23.840Z
to be before or equal to:
2025-06-19T16:01:23.839Z
Here the AI failed completely. Even though the issue was caused by a subtle bug in the code (the fix is here), most of the models wanted to fix the test by adding a buffer. And when I pointed out that I suspected a bug in the code, this is what I got:
Now I see the issue! The problem is that the Exposed provider is using LocalDateTime (datetime columns and CurrentDateTime) but the test is expecting timezone-aware timestamps.
Timezone issue causing a millisecond difference? Come on.
To be fair, the AI provided other plausible reasons for the issue, but it never guessed the right one. One of the models actually pointed to the line with the bug, but praised it for correctly handling MsSQL time rounding issues.
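The real bug is in the linked fix; but as an illustration of how this class of off-by-one-millisecond flakiness can arise with no timezone involved, consider sub-millisecond truncation. The timestamps below are made up, and this is only the general shape of the problem, not the actual ShedLock code:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class PrecisionDemo {
    public static void main(String[] args) {
        // An in-memory timestamp with sub-millisecond precision...
        Instant lockUntil = Instant.parse("2025-06-19T16:01:23.839500Z");
        // ...gets truncated to milliseconds when stored in the database.
        Instant stored = lockUntil.truncatedTo(ChronoUnit.MILLIS);
        // The stored value is now 0.5 ms earlier than the original, so a test
        // comparing the two fails by one millisecond only in the unlucky runs
        // where the sub-millisecond part happened to be non-zero.
        System.out.println(stored);                     // 2025-06-19T16:01:23.839Z
        System.out.println(stored.isBefore(lockUntil)); // true
    }
}
```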
So, to summarize: AI is a useful tool. If you are not already playing with it, pick a chore you are procrastinating on and give it a try.