You own your data – forever. You don’t need to keep paying monthly fees if you decide to switch to a different system just to access your historical data and conversations.
I’ve been advocating that development discussions should happen in Git* issues/PRs/projects whenever possible, so the conversations have context. One requirement for this to work long term is that you need access to these conversations for the duration of a product, and perhaps beyond.
There is a hosting cost, but it is minimal ($5/mo on a Linode server). There is also some ops cost to keep things running and up to date, but in my experience, Gitea is cake to self-host compared to things like WordPress or Email. The benefit-to-effort ratio is huge.
Git-driven CD (Continuous Deployment) typically means that whenever you merge to main or tag a repo, something automatically gets deployed. It can be something as simple as a static website being updated.
One of the biggest advantages to this is that you know for sure the code was checked in and tagged. And you know exactly what code was deployed. There is no manual process involved to mess it up.
This may not seem like a big deal, but it is. As humans, we get lazy and forget to do the simple things like manually tagging a repo – especially on small teams where process is not emphasized. However, CD is a way to have a process without having a process. You do it in the name of saving time, but it also gives you enough process that things are now also consistent.
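As a minimal sketch of what this can look like (assuming a bare Git repository on the server, a static site stored in the repo, and a hypothetical web root at /var/www/site), a post-receive hook written in Python can deploy automatically on every push to main:

```python
#!/usr/bin/env python3
# Hypothetical post-receive hook for a bare repo on the server.
# Deploys the pushed content whenever main is updated.
import subprocess
import sys
import tempfile

DEPLOY_REF = "refs/heads/main"   # deploy on pushes to main
WEB_ROOT = "/var/www/site"       # where the static site is served from

def deploy(commit):
    # Export the pushed commit to a temp dir, then sync it to the web root.
    with tempfile.TemporaryDirectory() as workdir:
        archive = subprocess.run(["git", "archive", commit],
                                 check=True, capture_output=True)
        subprocess.run(["tar", "-x", "-C", workdir],
                       input=archive.stdout, check=True)
        subprocess.run(["rsync", "-a", "--delete", workdir + "/", WEB_ROOT],
                       check=True)

# Git feeds the hook one "old new ref" line per updated ref on stdin.
for line in sys.stdin:
    old, new, ref = line.split()
    if ref == DEPLOY_REF:
        print("deploying", new[:8], "to", WEB_ROOT)
        deploy(new)
```

Tag-triggered deploys work the same way; the hook just matches refs/tags/ instead of refs/heads/main.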
Michael Feathers defines legacy code as code without tests. He further elaborates in his book:
“Code without tests is bad code. It doesn’t matter how well written it is; it doesn’t matter how pretty or object-oriented or how well-encapsulated it is. With tests, we can change the behavior of our code quickly and verifiably. Without them, we really don’t know if our code is getting better or worse.”
Without automated tests, any modification to the codebase is risky, as there is no reliable way to know if a change introduces defects. This lack of feedback forces us to rely on manual testing or institutional knowledge – often lost over time – which makes maintenance difficult and error-prone. It is not about the age of the code or who wrote it, but about whether the code can be changed safely. If you cannot verify behavior through tests, the code is “legacy” regardless of its age or origin.
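A small, hypothetical illustration (the function and numbers are made up): even a single automated test pins down behavior, so we can change the code and know immediately whether we broke it.

```python
import pytest

# Hypothetical example: a pricing rule and a test that pins its behavior.
def discounted_price(price, quantity):
    """Apply a 10% discount on orders of 10 or more items."""
    total = price * quantity
    if quantity >= 10:
        total *= 0.9
    return total

def test_discounted_price():
    # If a refactor changes either case, pytest tells us immediately.
    assert discounted_price(2.0, 5) == pytest.approx(10.0)
    assert discounted_price(2.0, 10) == pytest.approx(18.0)
```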
“past performance is not indicative of future results”
The above phrase is a disclaimer we often see associated with investments.
However, this applies to more than investments …
A technology that was cutting edge in the past may not be the best option for the future (for example, building web applications in PHP vs. Go).
People who did good work in the past usually continue to do good work, but not always (they can lose interest, suffer burnout, have personal problems, etc.).
Workflows used in the past do not always work in the future (for example, requiring patches on mailing lists will probably alienate a significant portion of the developer population who were raised on pull requests).
As the level of integration in integrated circuits continues to increase, the ICs we used in the past may not be the best solution compared to more highly integrated options available in the future.
How we proceed in the future needs to be a combination of what we learned in the past (experience) and the technology available in the future (vision). Again, dysfunction arises when we mix these two modes up.
In the realm of product development, there are two powerful forces – experience and vision.
Experience (the past) is a guide – a help to avoid serious blunders and mistakes. Experience can help us avoid dead-end tangents. Experience gives us a gut feel that is critical at times. Experience helps us be more efficient if we can make fewer mistakes.
Vision (the future) is where we are going. It is seeing things that have not been done before. It is exploring new combinations of technology to build compelling products. It is adopting new technology, including tools, software, components, and workflows. It is about understanding the needs of your customers.
Without experience, we are on the path to collapse.
It is very true, we need great people in our organizations. And I have the utmost respect for those who can motivate and lead people. I also have the utmost respect for the capabilities of humans – that divine spark of creativity and the ability of the human brain to solve problems and make new connections.
However, to do much in this age requires people to work together and build on the work of the past. What one person does, another person needs to continue. The lone accomplishments of any one individual may be impressive, but without continuity and scale, there is likely to be very little lasting impact. Without systems in place (YOUR Platform), the brilliant work of your people will not be utilized to its full potential.
In debates about technology, people bring up points such as: an Arch Linux update occasionally breaks and the system does not boot; therefore, Arch Linux is bad.
While the above point is true, it is also irrelevant because it is not the entire story. What about:
The hours upon hours I’ve saved updating and installing stuff because pacman is so fast.
The hundreds of packaged tools and applications that save me from installing them manually.
The AUR, which has everything that is not already packaged, making it easy to install even the most obscure piece of software.
The benefits of using the latest and best versions of everything. After all, if something is getting better, we may as well benefit from it now rather than later.
The simplicity of PKGBUILD, which allows me to create and modify my own packages easily.
I could go on, but you get the point. After running Arch for over 10 years, let’s conservatively say I’ve saved an hour a week by using Arch – that is 520 hours. I’ve spent maybe 5 hours working through broken stuff in Arch in 10 years. That is a benefit-cost ratio (BCR) of 104. So yes, the 5 hours are real, but they are irrelevant.
Innovation is fueled by vision. Without vision, things can be directionless, drifting toward obscurity. So, how to encourage vision? Perhaps the first thing is not to discourage it. Ironically, the more successful an organization is, the more likely this is to happen. This happens when people mistakenly confuse experience with vision. In the realm of technology, how things were done yesterday, while a useful guide and foundation, will not get us to where we need to go tomorrow.
To leverage experience, we need to capture it. And to capture it, we need process. Without this, leveraging the experience of our best people is at best random. If we can capture experience, then we have it forever, even when our best people inevitably move on.
After yesterday’s post, I was asked: “What’s your process to capture experience?”
The most important thing is to get information out of transient mediums into permanent mediums.
What are the attributes of a permanent medium?
Searchable
Persistent (exists over time, backed up)
Easily accessible to all relevant parties over time, even those who join late
Well organized
Has context
Improved over time
Turned into automation where possible
The first two attributes are the easiest. While email, chat, and meetings are useful in teams for communication and notifications, they are NOT permanent mediums for information. Even though they may be persistent and searchable, they fail on the other characteristics.
In part 1, we discussed the importance of getting information into permanent mediums. Below are some that I use:
Workflowy. This is my second brain with thousands of notes, links, how-tos, etc. It is easy to share Workflowy notes with others as needed, and multiple people can collaborate on a note with just a link – no account needed. I can’t recommend Workflowy enough – it is so flexible and powerful.
Reflection is another important aspect of the experience capture process. Do we carefully evaluate and critically think about what happened? We don’t do this to blame ourselves or others, but rather to recognize areas where we might improve. Reflection can also include recognizing things that went well and what we need to keep doing.
Some examples:
Personally, spend five minutes journaling at the end of each day. What went well? What did not? What are we thankful for?
Periodic project review – what have we learned? Is what we learned captured?
Pull requests – have we captured everything we have learned when we did this work? Are there documentation and tests so the work can be extended and reused?
Look for opportunities to automate. Automation is one of the best ways to capture experience.
If we do not reflect, we are destined to keep repeating the same mistakes and needlessly doing the same work over and over.
One of the most tragic and common forms of human nature is to just keep doing what we are doing, even if it is not working.
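One hedged example of capturing experience as automation (the checks and file names below are hypothetical): instead of relying on someone to remember the lessons from the last release, encode them in a small script that runs every time.

```python
#!/usr/bin/env python3
# Hypothetical pre-release script: each check encodes a lesson learned
# from a past mistake, so the experience is captured in automation.
import pathlib
import subprocess
import sys

errors = []

# Lesson: we once shipped a release without release notes.
if not pathlib.Path("CHANGELOG.md").exists():
    errors.append("CHANGELOG.md is missing")

# Lesson: uncommitted changes were once left out of a tagged release.
status = subprocess.run(["git", "status", "--porcelain"],
                        capture_output=True, text=True, check=True)
if status.stdout.strip():
    errors.append("working tree has uncommitted changes")

if errors:
    for err in errors:
        print("ERROR:", err)
    sys.exit(1)

print("pre-release checks passed")
```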
AI is really good at summarizing existing information – Perplexity Pro has mostly replaced Google/DuckDuckGo, etc., and is worth every penny. AI is really good at finding API documentation – much quicker than searching through docs, and it often provides a useful snippet of how to use an API. Occasionally, I’ll use AI to write a quick Python script where I don’t care how it works.
The most useful scenario is when you are having an intelligent back-and-forth conversation with AI, and you understand what is going on. What does not work is just lazily expecting it to do all your work so you don’t have to understand it – especially the hard parts like architecture. If AI is doing things that you don’t understand, ask it to teach you. Write tests so you can be sure it is right. Trust, but verify.
I’m currently working on a project with a smart intern working on the frontend code. He is working fast and appears to be vibe coding some of it. Some of it is a bit of a mess, but it mostly works. Some parts have obviously never even been tested. Is vibe coding in this case a net gain? Probably, but I’m not sure yet – will know more once we try to maintain this code.
On a project that is complex enough to require some architecture, coding is not the hard part. Architecture, requirements, integration, testing, debugging, support … all this stuff that is required to get it right swamps any coding effort. As one person said, “Code is a liability, not an asset. Aim to have as little of it as possible.”
At this point, don’t aim to have AI write all your code for you. Use AI to help you write better code faster that YOU understand.
“It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” - Mark Twain (maybe)
AI is really good at summarizing existing information. It is less good at thinking, although it can do some amazing things. The biggest deficiency I see right now is that AI does not know how to ask questions and doesn’t know what it doesn’t know. It just makes assumptions and pretends like it knows what it is doing. People like this are some of the hardest ones to work with. They act like they know everything and never think they are wrong. Smart people who are blind to their own weaknesses create the subtlest problems – the hardest ones to find.
Until AI is self-aware of its own weaknesses and knows how to ask questions, we cannot fully trust it to operate like a professional human. It needs to be watched very carefully. AI can still be very useful, and it will likely get better, but for now, I think this is the current state.
Silos occur when a person or group of people operate in isolation from the rest of the organization. One of the reasons organizations fail is that one silo does not know what the other silo is doing.
Silos are deadly in our organizations. How can we avoid them? The efficient way is to use open, transparent, and common systems where possible for:
Software developers build really good workflow tools. Why? Because they can build their OWN tools. For this reason, no matter what your discipline, I recommend starting with Git*. For small teams, this can get you most of the way there. For larger teams, you may need something more comprehensive like Odoo. But start small with Git*, cover the basics, and add more complex systems when needed.
This is good general advice in life – don’t automatically distrust people and go looking for problems, but when something does not look or feel right, verify.
As engineers, when we are building something new, this is important.
Are our assumptions correct? Have we checked the edge conditions? Did this change break anything else? What are we missing?
As systems become more complex, two tools come to mind:
Simulation (a faster/cheaper way to verify)
Automation (testing, deployment, etc. – do things humans don’t have time for)
In complex systems, blind trust does not get us very far, especially in ourselves. As engineers, the work we should verify the most is our own.
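A rough sketch of the simulation idea (the device, rates, and buffer size below are all made up): a few lines of code can check an edge condition – how big a data buffer must be when uploads keep failing – far faster and cheaper than finding out in the field.

```python
import random

BUFFER_SIZE = 1024         # samples the device can hold
UPLOAD_PERIOD_S = 60       # try to upload once a minute
UPLOAD_FAILURE_RATE = 0.2  # assume 20% of uploads fail
SIM_DURATION_S = 7 * 24 * 3600  # simulate one week, one sample per second

random.seed(0)             # make the run repeatable
buffered = 0
worst_case = 0

for t in range(SIM_DURATION_S):
    buffered += 1  # one new sample every second
    if t % UPLOAD_PERIOD_S == 0 and random.random() > UPLOAD_FAILURE_RATE:
        buffered = 0  # successful upload drains the buffer
    worst_case = max(worst_case, buffered)

print(f"worst-case buffer usage: {worst_case}/{BUFFER_SIZE} samples")
assert worst_case <= BUFFER_SIZE, "buffer would overflow - needs a redesign"
```

In this made-up scenario, the simulation answers a sizing question in seconds that would otherwise take weeks of watching real devices.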