Author Archives: Lukáš Křečan

Have an architecture north star

Imagine you have a magic wand and could change your architecture all at once – what would it look like? That’s your north star. It’s a shared and documented idea of how the architecture would look if reality did not stand in our way. How many services would you have? How would they communicate? Where would the data be stored? What would the deployment and build look like? What about the tests? Do you know? If yes, write it down. If not, get together with other clever minds in your company and write it down.

There are a few important points here. First, there should be broader agreement – it shouldn’t exist only in your head. Ideally, the north star will be something that can be shared with every engineer and technical manager in the company. Everybody should know where you are heading.

Imagine a team is implementing a new product. Should it be a new service or a module within an existing service? How should it communicate with other components? You really don’t want to have everyone reinvent the wheel or wait for an architect to provide all the answers. The north star architecture should answer most of the routine architectural questions.

Of course, the north star is just a direction; don’t be sad if you never get there. Getting rid of your monolith will take ages, but the north star should show you roughly how the architecture will look once the refactoring is done.

Moreover, the north star is a moving target. It will change, and that’s OK. You will learn new things, there will be new requirements, the business will pivot, and the north star will move accordingly. And this is good news: by iterating on the north star, you will learn a lot.

Additionally, there will be detours. The north star will point in one direction, but everyday reality will force you to go in another. And that’s OK too. The important point is that you should know that you are diverging, and why.

Lastly, the discussion about the north star helps you focus on the long term. It’s way too easy to get swamped by urgent short-term tasks; thinking about the north star gives you the opportunity to step back. The product part of the organization has a long-term vision – engineering should have one too.


I am looking for a job

I am looking for a job. In short, I am looking for a hands-on tech leadership role, ideally around the JVM ecosystem.

I can offer

  • Pragmatic approach to software design and system architecture
  • I have learned to prefer simple design
  • I am able to evaluate trade-offs and pick the least bad solution.
  • Expertise in Java and Kotlin, Spring, and Spring Boot
  • Experience with microservices
  • Experience with Kafka and event-driven systems
  • REST APIs
  • Postgres
  • I am a huge fan and evangelist of TDD and automated testing in general.
  • I really love pair or mob programming. I really believe it provides a huge advantage to the teams. I can help you adopt it.
  • Tech leadership. I have experience in technical leadership of teams and coordination of multiple teams.
  • Governance. I have led and participated in REST API, GraphQL, and OCSF governance.
  • Architecture guidance.
  • I am not afraid to make decisions.
  • I am good at cutting the scope.
  • If you want to see my code, you can check my open source projects.
  • Or you can check some of my talks. The ToC one is pretty good.

I am looking for

  • Either a place where I can learn something, or one where I have enough influence to make positive changes.
  • I really love coding, but I also enjoy design, architecture, tech leadership, mentoring, etc. Therefore, I am looking for a tech leadership or architecture role. Staff engineer is the role that suits me best.
  • Autonomy, Mastery, Purpose
  • I am looking for a company that understands the importance of trust and ownership.
  • I would like to work on something that makes the world a better place.
  • In Prague, CZ or remotely in EU timezone

My CV and LinkedIn profile.

Are you interested? Send me an email to lukas@krecan.net

My AI experiments

These are just a few notes about my experiments with AI coding tools.

Key takeaways

  1. AI tools will indeed change the way we code. It’s a useful tool for some tasks, a distraction for others.
  2. To me, they are especially helpful for chores I don’t want to do (learning and forgetting yet another YAML configuration format, writing boilerplate code, bash scripts, etc.).
  3. AI is far from replacing engineers. It often gets stuck on trivial issues and then attempts random changes to fix the problem.
  4. AI usage leads to a lot of generated code. Without AI, my laziness pushes me toward code reuse, abstraction, and better design. AI can generate hundreds of lines of code in a minute, and the same laziness pushes me to accept all of it, which can lead to a worse design.

All of the examples below are related to ShedLock, an open-source project I maintain.

New Firestore provider

ShedLock can integrate with various DB technologies, and from time to time I get a request to support a new, exotic one. Recently I got a request to support Firestore. Usually I ask the requestor to send a PR. This time, I asked Junie to implement it. The process is pretty streamlined: the API is well defined, the structure of the code is similar across technologies, and there is a test suite that needs to pass, so this is an ideal task for AI.

Junie checked the structure of the project and implemented the code. Then I asked it to implement tests. Again, it found which class to extend to get the test suite, configured Firestore in Testcontainers, and almost did it all. But then it got stuck on a simple problem: it was configuring the test client using setHost("http://" + firestoreEmulator.getEmulatorEndpoint()). It’s kinda obvious that the host should not have the "http://" prefix, but not to the AI. It started to configure random environment variables and make other random changes without solving the problem. But other than that, the AI was a huge time-saver. The implementation took like 15 minutes; you can see it here.
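As a minimal illustration of the kind of fix needed (the class and method names below are mine, not from the actual ShedLock PR): the client’s host setter expects a bare host:port, so any scheme prefix has to be stripped first.

```java
// Hypothetical helper illustrating the fix. A Testcontainers emulator
// endpoint may or may not carry a scheme, while a setHost(...)-style
// setter typically expects a bare "host:port".
public class EndpointNormalizer {
    // Strips a leading "http://" or "https://" so the value can be
    // passed to a host-only setter.
    static String stripScheme(String endpoint) {
        return endpoint.replaceFirst("^https?://", "");
    }

    public static void main(String[] args) {
        System.out.println(stripScheme("http://localhost:8080")); // prints "localhost:8080"
        System.out.println(stripScheme("localhost:8080"));        // already bare, unchanged
    }
}
```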

Fixing BOM

ShedLock provides a BOM with a list of all modules. So every time somebody adds a module, they should add it to the BOM. And surprise, surprise, people often forget. I wanted to check if any modules were missing, but checking it manually is boring – you need to compare the list of modules with the BOM. Or you can ask your friendly AI and you are done.
Again, a huge time-saver. But what if you want to check the completeness at build time, for each PR?

Checking BOM completeness

Let’s ask Claude: “Is there a way (a Maven plugin or something similar) to check that all modules are mentioned in the BOM?” Claude goes directly for a bash script, trying to use xmllint, which is not installed on my machine. I then try to steer it toward Java or Groovy, but in the end it tried a few approaches and came up with a bash script like this. Does it work? Yes. Do I like it? No. Do I know a better way? No. And this is the tricky part. I have a working solution which I don’t like. Maybe there is a better one. Maybe one of the solutions the model tried was the right one and did not work due to some trivial issue. I could continue prodding the AI for a better solution and spend the whole day doing that.
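For illustration, the core of the check can be sketched in plain Java (this is my own sketch, not the script the AI produced and not ShedLock’s actual build code): compare the &lt;module&gt; entries of the parent POM with the artifactIds listed in the BOM.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of a BOM completeness check: given the parent POM
// and the BOM as strings, report modules missing from the BOM.
public class BomCompletenessCheck {
    private static final Pattern MODULE = Pattern.compile("<module>([^<]+)</module>");

    static List<String> missingFromBom(String parentPom, String bomPom) {
        List<String> missing = new ArrayList<>();
        Matcher m = MODULE.matcher(parentPom);
        while (m.find()) {
            // Module paths can be nested (e.g. providers/shedlock-provider-jdbc);
            // assume the artifactId matches the last path segment.
            String path = m.group(1);
            String artifactId = path.substring(path.lastIndexOf('/') + 1);
            if (!bomPom.contains("<artifactId>" + artifactId + "</artifactId>")) {
                missing.add(artifactId);
            }
        }
        return missing;
    }

    public static void main(String[] args) {
        String parent = "<module>shedlock-core</module><module>providers/shedlock-provider-jdbc</module>";
        String bom = "<artifactId>shedlock-core</artifactId>";
        System.out.println(missingFromBom(parent, bom)); // prints [shedlock-provider-jdbc]
    }
}
```

In a real build this would read the POM files and fail the build when the list is non-empty, which is roughly what the generated enforcer rule does.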

For example, I can ask it to use Maven; it will praise my cleverness and spit out a Maven enforcer rule like this. Do I like it? Almost.

This is interesting. Without AI, I would have checked more thoroughly whether there was a better way. If not, I would have abandoned the task, as the value is not worth the effort. Or I would have implemented the plugin and shared it with others. But with AI, the effort to publish it is way bigger than the almost-free implementation, so I will keep the plugin to myself. I am afraid we will see this more and more often: instead of reusing tools and simple apps, we will generate tailor-made ones. With AI, it’s faster and easier to generate a simple tool than to search for and evaluate existing ones.

Debugging flaky test

I had a flaky test. From time to time I got a few-millisecond difference in one of the tests.

Error:    MsSqlExposedLockProviderIntegrationTest>AbstractJdbcLockProviderIntegrationTest.shouldCreateLockIfRecordAlreadyExists:81->AbstractLockProviderIntegrationTest.shouldCreateLock:49->AbstractJdbcLockProviderIntegrationTest.assertUnlocked:56 [is unlocked] 
Expecting actual:
  2025-06-19T16:01:23.840Z
to be before or equal to:
  2025-06-19T16:01:23.839Z

Here the AI failed completely. Even though the issue was caused by a subtle bug in the code (the fix is here), most of the models wanted to fix the test by adding a buffer. And when I pointed out that I suspected a bug in the code, this is what I got:

Now I see the issue! The problem is that the Exposed provider is using LocalDateTime (datetime columns and CurrentDateTime) but the test is expecting timezone-aware timestamps.

Timezone issue causing a millisecond difference? Come on.

To be fair, the AI provided other plausible reasons for the issue, but it never guessed the right one. One of the models actually pointed to the line with the bug, but praised it for correctly handling MsSQL time rounding issues.
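The real fix is in the commit linked above; purely as an illustration of the general class of bug (not the actual ShedLock code), here is how truncating sub-millisecond timestamps can produce exactly this kind of one-millisecond discrepancy:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Illustration only: a timestamp with sub-millisecond precision that is
// truncated (rather than rounded) when stored can end up one millisecond
// "before" the in-memory value it should equal or exceed.
public class TruncationDemo {
    // Simulates storing into a column with millisecond precision.
    static Instant storeTruncated(Instant i) {
        return i.truncatedTo(ChronoUnit.MILLIS);
    }

    public static void main(String[] args) {
        Instant lockedAt = Instant.parse("2025-06-19T16:01:23.839999Z");
        Instant stored = storeTruncated(lockedAt); // ...23.839Z
        // The stored value lost almost a full millisecond, which is enough
        // to fail an "is before or equal to" assertion in a test.
        System.out.println(lockedAt + " stored as " + stored);
    }
}
```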

So, to summarize: AI is a useful tool. If you are not already playing with it, pick a chore you are procrastinating on and give it a try.