Becoming friends with Clojure protocols

I’ve been programming Clojure for several years, and yet I’ve managed to avoid protocols during all that time (I’ve also avoided macros, but that is another story). I always had a colleague do the “dirty work”, or found some sad excuse for why it wasn’t necessary right now. No more… this week I got my hands dirty.

For me, Clojure protocols solve the same problem that I previously used interfaces in Java and PHP for: Dependency Injection (DI) and Inversion of Control (IoC). This kind of abstraction probably has several purposes, but I use it to be able to reason about a “service” without knowledge of its implementation.

Having your services “hidden” behind a protocol makes it very pleasant to test functions that would normally require external access causing side effects (like API endpoints, databases and queues). It also ties in well with application state-management libraries like Mount and Component, when you need a “stand-in” for one of these external resources, e.g. for some manual testing in the REPL.

As soon as I dived into the example about protocols found on the Clojure website, I found it too superficial for someone like me. I’ve never approached programming very academically. For some unknown reason, most things with fancy words (polymorphism included) just refuse to stick to the inside of my skull until I see and feel them in action. My plea for help was heard on the Clojurians Slack, and after I understood (a bit more), I decided to create a more elaborate example that maybe others would find useful.

The protocol (interface)

For a more realistic example than the one on the Clojure website, imagine some entity in a database with CRUD operations (Create, Read, Update & Delete):

(defprotocol EntityStore
  (create [this id] [this id initial-data])
  (fetch [this id])
  (save [this id data])
  (delete [this id]))

For the Read operation I chose a function named fetch (over get and read), and for the Update operation I use save (over update and replace). I think both fetch and save clearly describe the intention of the operation without conflicting with existing function names in clojure.core. The overlap in naming could otherwise confuse developers, and at the same time the choice avoids linting warnings like … already refers to ….

Adding doc-strings prior to implementation forces you to evaluate the exact needs of your protocol in order to articulate them. I caught errors in my design on several occasions while doing this:

(defprotocol EntityStore
  "All operations to the store are atomic (e.g. a DB implementation
   would use transactions or something similar)."
  (create [this id] [this id initial-data]
    "Creates a new entity in the store, and returns a map representing
     the new entity.")
  (fetch [this id]
    "Fetches (reads) an entity from the store or returns nil if it
     doesn't exist.")
  (save [this id data]
    "Saves (updates) an entity with the id `id` overwriting its data,
     returns a map representing the updated entity.")
  (delete [this id]
    "Deletes an entity with the id `id` from the store and returns
     nil."))

I decided to put the protocol definition in the namespace my-app.service.entity-store, because it would allow me to use it in the code like so:

(ns my-app.core
  (:require [my-app.service.entity-store :as entity-store-service]
            ...))

...
(let [entity-a (entity-store-service/fetch entity-store "id-for-A")
  ...

The service part of the NS emphasizes that implementation details are “hidden away” on purpose, and I think entity-store-service/fetch reads very well in the code.

Not having the protocol definition in the same namespace as where it is used tricked me at first and caused the error: Unable to resolve symbol: <symbol name> in this context. It took me a while to figure out that methods defined using defprotocol “live” in the namespace where the protocol is defined.
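To make the pitfall concrete, here is a minimal REPL sketch (using the namespaces from above): calling fetch unqualified in my-app.core fails, while calling it through the protocol’s namespace works.

```clojure
;; In my-app.core (no :refer of fetch):
(fetch entity-store "id-for-A")
;; => Unable to resolve symbol: fetch in this context

;; Qualified through the namespace that holds the defprotocol:
(entity-store-service/fetch entity-store "id-for-A")
```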

The (mock) implementation

I’m going to start a bit backwards with a mock of the entity store, because it is simpler in the sense that it does not require any third-party libraries to implement.

(ns my-app.service.impl.in-memory-entity-store
  (:require [my-app.service.entity-store :as entity-store-service]))

(defn create
  ([store-atom id]
   (create store-atom id {}))
  ([store-atom id data]
   ;; Return the created entity (not the whole store),
   ;; as the protocol's doc-string promises.
   (get (swap! store-atom assoc id data) id)))

(defn fetch
  [store-atom id]
  (get @store-atom id))

(defn save
  [store-atom id data]
  ;; Return the updated entity, as the protocol's doc-string promises.
  (get (swap! store-atom assoc id data) id))

(defn delete
  [store-atom id]
  (swap! store-atom dissoc id)
  nil)

(deftype InMemoryEntityStore [store-atom]
  entity-store-service/EntityStore
  (create [_this id] (create store-atom id))
  (create [_this id data] (create store-atom id data))
  (fetch [_this id] (fetch store-atom id))
  (save [_this id data] (save store-atom id data))
  (delete [_this id] (delete store-atom id)))

A classic mistake to make at this point is to remove either create or save from the protocol, since the implementations are identical. But they are only identical (for now) because this mock is a very naive implementation. Also remember: the protocol should never know about the implementation details of the exposed functionality.

For convenience

Consider adding an extra convenience function in the “implementation” namespace (in the above example: my-app.service.impl.in-memory-entity-store). Such a function allows you to avoid importing the class that deftype creates, which would otherwise require your code to look something like:

(ns my-app.core
  (:require [my-app.service.impl.in-memory-entity-store])
  (:import [my-app.service.impl.in-memory-entity-store InMemoryEntityStore]))

...

(InMemoryEntityStore. (atom {}))

Instead, add a function like new-store:

(ns my-app.service.impl.in-memory-entity-store
  ...

(defn new-store
  "Convenience function for creating an in memory entity store."
  [store-atom]
  (InMemoryEntityStore. store-atom))

Which would allow something like:

(ns my-app.core
  (:require [my-app.service.impl.in-memory-entity-store :as in-memory-entity-store]))

...

(in-memory-entity-store/new-store (atom {}))
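With new-store in place, a quick REPL session against the mock might look like this (ids and data hypothetical):

```clojure
(def store (in-memory-entity-store/new-store (atom {})))

(entity-store-service/create store "id-1" {:name "Jane"})
(entity-store-service/fetch store "id-1")
(entity-store-service/delete store "id-1")
(entity-store-service/fetch store "id-1")
;; => nil
```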

Real implementation

The following NoSQL implementation, using Monger (a Clojure client for MongoDB), is also very naive: 😅

(ns my-app.service.impl.mongo-entity-store
  (:require [monger.collection :as mongo-document]
            [monger.core :as mongo]
            [my-app.service.entity-store :as entity-store-service]))

(def coll
  "Collection in which entities are stored in MongoDB."
  "entities")

(defn create
  ([db oid]
   (create db oid {}))
  ([db oid data]
   (mongo-document/insert-and-return db coll (assoc data :_id oid))))

(defn fetch
  [db oid]
  (mongo-document/find-map-by-id db coll oid))

(defn save
  [db oid data]
  (mongo-document/update-by-id db coll oid data))

(defn delete
  [db oid]
  (mongo-document/remove-by-id db coll oid))

(deftype MongoEntityStore [db]
  entity-store-service/EntityStore
  (create [_this id] (create db id))
  (create [_this id data] (create db id data))
  (fetch [_this id] (fetch db id))
  (save [_this id data] (save db id data))
  (delete [_this id] (delete db id)))

(defn new-store
  "Convenience function for creating a NoSQL entity store."
  [uri]
  (let [{:keys [db]} (mongo/connect-via-uri uri)]
    (MongoEntityStore. db)))

On the surface, the above solution looks fine and dandy, but it has (at least) one flaw. It requires that the id given through the protocol is a BSON ObjectId (a MongoDB-specific Java object). Though the in-memory implementation using an atom would not complain about using ObjectId as lookup keys, it is often preferable to avoid bleeding DB specifics outside the protocol. The following three functions (hexify, pad & s->oid) are a somewhat hacky attempt to work around it and use strings instead (here be dragons 🔥🐉):

(ns my-app.service.impl.mongo-entity-store
  (:require [monger.collection :as mongo-document]
            [monger.core :as mongo]
            [my-app.service.entity-store :as entity-store-service])
  (:import [org.bson.types ObjectId]))

; Shamelessly copied from https://stackoverflow.com/questions/10062967/clojures-equivalent-to-pythons-encodehex-and-decodehex
(defn hexify
  "Convert byte sequence to hex string"
  [coll]
  (let [hex [\0 \1 \2 \3 \4 \5 \6 \7 \8 \9 \a \b \c \d \e \f]]
    (letfn [(hexify-byte [b]
              (let [v (bit-and b 0xFF)]
                [(hex (bit-shift-right v 4)) (hex (bit-and v 0x0F))]))]
      (apply str (mapcat hexify-byte coll)))))

;; Strongly inspired by https://stackoverflow.com/questions/27262268/idiom-for-padding-sequences
(defn pad
  [n val coll]
  (take n (concat coll (repeat val))))

(defn s->oid
  [^String s]
  (->> (.getBytes s)
       (pad 12 0xFF)
       (hexify)
       (ObjectId.)))

(def coll
  "Collection in MongoDB in which entities are stored."
  "entities")

(defn create
  ([db id]
   (create db id {}))
  ([db id data]
   (mongo-document/insert-and-return db coll (assoc data :_id (s->oid id)))))

(defn fetch
  [db id]
  (mongo-document/find-map-by-id db coll (s->oid id)))

(defn save
  [db id data]
  (mongo-document/update-by-id db coll (s->oid id) data))

(defn delete
  [db id]
  (mongo-document/remove-by-id db coll (s->oid id)))

(deftype MongoEntityStore [db]
  entity-store-service/EntityStore
  (create [_this id] (create db id))
  (create [_this id data] (create db id data))
  (fetch [_this id] (fetch db id))
  (save [_this id data] (save db id data))
  (delete [_this id] (delete db id)))

(defn new-store
  "Convenience function for creating a NoSQL entity store."
  [uri]
  (let [{:keys [db]} (mongo/connect-via-uri uri)]
    (MongoEntityStore. db)))

The above solution has the following advantages:

  • The Mongo-specific implementation (the ObjectId class) is entirely hidden behind the protocol (almost - I’ll get back to this).
  • There is no need to add extra indexes on the collection in the Mongo database, which using an alternative field would have strongly encouraged.
  • The CRUD functions are all simple because they can leverage the ...-by-id functions in the Clojure Mongo driver (Monger).

There is still a bit of Mongo hiding in the shadows, because the id must be a string and only the first 12 bytes are considered when magically generating the ObjectId behind the scenes. Also, not being able to easily correlate the id "my-juicy-idA" with ObjectId("6d792d6a756963792d696441") is a bit of a bummer.
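The correlation can at least be checked in the REPL. Given the definitions of pad, hexify and s->oid above, each ASCII byte of the 12-character string maps to two hex digits (\m → 6d, \y → 79, \- → 2d, …):

```clojure
(str (s->oid "my-juicy-idA"))
;; => "6d792d6a756963792d696441"
```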

It might be possible to use UUIDs encapsulated in Mongo BSON Binary though, but that is outside the scope of this post.

The business logic

Leaving all the exciting challenges with Mongo behind and moving on…

An “Entity store service” is now available, which the business logic can leverage while staying oblivious to its implementation.

Consider the following code describing some super important business logic:

(ns my-app.core
  (:require [my-app.service.entity-store :as entity-store-service]
            [my-app.service.impl.mongo-entity-store :as mongo-entity-store]))

(def entity-store
  (mongo-entity-store/new-store "mongodb://admin:secret@172.21.0.2/customer1"))

(defn apply-business-logic
  [{:keys [entity-id] :as _event}]
  (when-let [entity (entity-store-service/fetch entity-store entity-id)]
    (if-not (= (:name entity) "Donald Duck")
      entity
      (do ; Someone has been testing (again) - cleanup
        (entity-store-service/delete entity-store entity-id)
        nil))))

The code in apply-business-logic doesn’t care whether entity-store is of the type MongoEntityStore or InMemoryEntityStore. This is very useful for testing, among other things.

Tests (using the mock)

Notice how the following test allows testing of apply-business-logic without having a database available during testing, or preparing test data in the database (and cleaning data in the database afterwards).

(ns my-app.core-test
  (:require [clojure.test :refer [deftest is testing]]
            [my-app.core :as sut] ; System Under Test
            [my-app.service.impl.in-memory-entity-store :as in-memory-entity-store]))

(deftest apply-business-logic
  (testing "Normal entity"
    (with-redefs [my-app.core/entity-store
                  (in-memory-entity-store/new-store
                    (atom {"123" {:name "John Doe"}}))]
      (is (= {:name "John Doe"} (sut/apply-business-logic {:entity-id "123"})))))
  (testing "Bad entity"
    (let [store-atom (atom {"123" {:name "Donald Duck"}})]
      (with-redefs [my-app.core/entity-store
                    (in-memory-entity-store/new-store store-atom)]
        (is (contains? @store-atom "123"))
        (is (nil? (sut/apply-business-logic {:entity-id "123"})))
        (is (not (contains? @store-atom "123"))))))
  (testing "Unknown entity"
    (with-redefs [my-app.core/entity-store
                  (in-memory-entity-store/new-store (atom {}))]
      (is (nil? (sut/apply-business-logic {:entity-id "non-existing"}))))))

The above code can be found on GitHub.

This post is getting long… so before I tire even the stubborn and enduring readers, I will stop with:

Protocols are your friend (that maybe you just need to get to know). 💜

Stop micromanaging your code

This rant is about a bad habit some developers pick up and seem to have a hard time ditching again… even after gaining lots of experience.

I guess it is to be expected. After having been burned one too many times by missing error handling in the software they work on, they become overprotective. But it often overcomplicates the code and leaves room (extra lines of code) to place “a fix” where “the fix” does not belong. Of course, there are plenty of gray areas, murky waters and personal opinions on… exactly where to slice the cake.

The “empty list” is a great example of overprotective code.

// PHP
function executeActions($actions) {
  if (empty($actions)) {
    return;
  }
  else {
    foreach ($actions as $action) {
      execute($action);
    }
  }
}

Clojure equivalent:

(defn execute-actions
  [actions]
  (when-not (empty? actions)
    (doseq [action actions]
      (execute action))))

But it could be even worse 😅

// PHP
function executeActions($actions) {
  if (empty($actions)) {
    throw new Exception("Actions cannot be empty");
  }
  else {
    foreach ($actions as $action) {
      execute($action);
    }
  }
}

Clojure equivalent:

(defn execute-actions
  [actions]
  (if (empty? actions)
    (throw (ex-info "Actions cannot be empty" {}))
    (doseq [action actions]
      (execute action))))

To fully understand the implications this code introduces, you have to put yourself in the shoes of whoever calls executeActions. Picture how the code will look on their end. The caller needs to know that empty lists are not accepted (which I don’t think a type system can, nor should, protect you from):

// PHP
$actions = generateActions();

if (! empty($actions)) {
  executeActions($actions);
}

Clojure equivalent:

(let [actions (generate-actions)]
  (when-not (empty? actions)
    (execute-actions actions)))

Often the special case pictured above is avoidable for both caller and callee of the function. I would argue that the following code is much simpler to reason about:

// PHP
function executeActions($actions) {
  foreach ($actions as $action) {
    execute($action);
  }
}

...

$actions = generateActions();

executeActions($actions);

Clojure equivalent:

(defn execute-actions
  [actions]
  (doseq [action actions]
    (execute action)))

...

(let [actions (generate-actions)]
  (execute-actions actions))

Usually, it does not hurt to loop over an empty list. When it does hurt, consider whether the error handling has been placed correctly.

💡 Hint: Checking for empty lists should often have been done much earlier than right before the loop (fail early).

Considering “empty lists” an error usually falls into the category of “business errors”. Postponing the “empty check” until right before the loop can be a sign that business logic is placed wrongly (too late).
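One way to “fail early” is to enforce the business rule where the data enters the system, not at the loop. A sketch in Clojure (all function names hypothetical):

```clojure
(defn handle-request
  [request]
  (let [actions (generate-actions request)]
    ;; Business rule: a request must produce at least one action.
    ;; Enforced here, at the boundary - not inside execute-actions.
    (when (empty? actions)
      (throw (ex-info "Request produced no actions" {:request request})))
    (execute-actions actions)))
```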

I urge you to think twice when you are tempted to do an “empty check”. Stop micromanaging your code - learn to let go.

Hosting ClojureScript SPA using Shadow-cljs on Netlify

Building ClojureScript Single Page Applications (SPA) on Netlify just works. The following instructions require no prior knowledge of either Netlify or build tools (like Shadow-cljs, webpack etc.), but some knowledge about HTML and Git is expected.

Set up a ClojureScript SPA project

SPA projects come in all shapes and sizes, causing equal diversity in paths for compiled code and build commands. Most of the time, the differences are small, even subtle. But for an automatic build service (including Netlify) these things need to be exactly right. The following description assumes a ClojureScript project setup matching the Shadow-cljs “Quick Start” guide (snapshot from Feb. 17, 2022 - Shadow-cljs v2.17.3).

The result of the “Quick Start” guide summed up:

  1. Run npx create-cljs-project <project-name> to create a new ClojureScript project
  2. Add “Hello World” code
  3. Set up a Shadow-cljs build named frontend. The same name is used in the examples below
  4. Add index.html page that uses the “Hello World” code

Notice: The link to the “Quick Start” guide is a snapshot to ensure the instructions found on this page will remain correct. But do checkout the newest version of the guide as well.
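For reference, a minimal shadow-cljs.edn matching that setup might look like the following (the :init-fn namespace is illustrative - use whatever the “Hello World” step created):

```clojure
{:source-paths ["src/main"]
 :dependencies []
 :dev-http {8080 "public"}
 :builds {:frontend {:target :browser
                     :modules {:main {:init-fn acme.frontend.app/init}}}}}
```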

Wrap up the project by pushing the code to a Git repo (Netlify supports GitHub and GitLab among others.)

Set up Netlify

From Netlify’s “Team overview” click Add new site, select Import from existing project and choose the repository and branch created above.

While npx shadow-cljs watch frontend is used for that neat “live update” local developer experience, all the build optimizations are reserved for “releasing”, which means a Netlify production build requires the following as its Build command:

npx shadow-cljs release frontend

For more information about compile, watch and release builds, see Basic Workflow. As the project advances (i.e. includes CSS), it is very likely that the “scripts” section of package.json should be leveraged instead of calling shadow-cljs directly.
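For instance, a hypothetical scripts section wrapping the release command (the Netlify Build command would then become npm run build):

```json
{
  "scripts": {
    "build": "shadow-cljs release frontend"
  }
}
```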

By default, Shadow-cljs puts all the JavaScript files it builds in the public/js directory. Files for publishing need to stay in the same directory, which is why the “Quick Start” guide gave detailed instructions for the index.html content and location. Netlify requires the following as its Publish directory to match the SPA project setup:

public/

Often several things need to be aligned to get the publishing directory content just right. Luckily, the above setup is simple 😅

Netlify should now look something like this:

Screenshot of build settings in Netlify

Now deploy the SPA and open it in a browser. It will just be a blank page, but in the Browser Console Hello World will be printed.

Screenshot of browser with console open showing cljs SPA

Put the URL in control

For more advanced Single Page Applications, it’s normal to use a routing library (e.g. Reitit) and put the URL in control of which “page” is being shown. While staying in the comfort of the local developer experience, this just works because the Shadow-cljs webserver will…

by default … serve all static files from the configured paths, and fall back to index.html when a resource is not found

Source: Shadow-cljs User Guide

But when the SPA is published on Netlify this is no longer the default behavior.

Luckily, it is easy to add “Rewrite rules” for a Netlify app by adding a file in the Publish directory (public/_redirects) with the following content:

/* /index.html 200

See Netlify documentation about “Rewrites and proxies”.

Now there is no excuse for that awesome ClojureScript SPA not being available online 😉

Heroicons from ClojureScript

Update 2022-02-20: Requiring single icons using :refer causes ALL Hero icons to be included in builds (even optimized). Instead, use :as (multiple times). Examples below have been updated.

In my opinion, Clojure and ClojureScript are lacking in the documentation department, especially when it comes to integrations with things outside the Clojure ecosystem.

I want to share how using Heroicons from within a ClojureScript Reagent (including Re-frame) application works. Some might find the following obvious. But if you (like me) aren’t using ClojureScript for frontend development every day… it might not be.

Which is sad because it is actually quite easy… when you already have Shadow-cljs configured 😎

First install it (just like described in the documentation):

npm install @heroicons/react

Now imagine already having a reagent component named my-component-missing-icon:

(ns my-app.core
  (:require
   [reagent.core ...] ; The three dots (...) means there is more
   ...))              ; but it isn't important for the example.

...

(defn my-component-missing-icon [] ; Shamelessly copied from Reagent's website
  [:div
   [:p "I am a component!"]
   [:p.someclass
    "I have " [:strong "bold"]
    [:span {:style {:color "red"}} " and red "] "text."]])

...

Require a specific icon from @heroicons/react/solid and insert it in the component (e.g. the icon CheckIcon):

(ns my-app.core
  (:require
   ["@heroicons/react/solid/CheckIcon" :as CheckIcon] ; <- new stuff
   [reagent.core ...]
   ...))

...

(defn my-component-with-icon [] ; Component now slightly changed.
  [:div
   [:p "I am a component!"]
   [:> CheckIcon {:style {:height "5rem" :width "5rem"}}]
   [:p.someclass
    "I have " [:strong "bold"]
    [:span {:style {:color "red"}} " and red "] "text."]])

Notice :>, which means “creating a Reagent component from a React one.”

The “having to convert a React component to Reagent” was the thing I was missing.

If Tailwind is thrown into the mix, it will be possible to style Heroicons like [:> CheckIcon {:class "h-5 w-5"}].

If build sizes are a concern, then avoid using :refer which otherwise could seem like the obvious way to reference multiple icons.

DON’T DO THIS - builds will include ALL icons from @heroicons/react/solid:

(ns my-app.core
  (:require
   ["@heroicons/react/solid" :refer [ChatIcon CheckIcon]] ; NOT optimizeable
   ...

To allow build tools (like Shadow-cljs) to properly optimize builds, instead require icons individually, even though it is more verbose:

(ns my-app.core
  (:require
   ["@heroicons/react/solid/ChatIcon" :as ChatIcon]
   ["@heroicons/react/solid/CheckIcon" :as CheckIcon]
   ...

Now go enjoy the JavaScript and React ecosystem from the comfort of ClojureScript.

Symptoms of lacking software quality

What makes up good software quality differs from person to person, making it somewhat subjective and hard to measure.

It is discouraging to work on projects where you feel that every time the team fixes a bug, it introduces two more. Having a hard time tracking down a bug and understanding how it hit production in the first place makes matters worse. Being unable to confidently give assurance that it will not happen again is downright frustrating.

Having been through a few different projects myself, I noticed that some symptoms seem to reappear.

Symptom 1 - No or too few test cases

As a developer, having automatic tests makes you feel more confident making changes to the codebase, knowing there is a safety net that reduces the risk of introducing bugs. A codebase with few automatic test cases can indicate that the task of writing tests is complicated. Test complexity is usually proportional to the complexity of the code subject to testing.

Code complexity should NOT be confused with complex business logic. You can have straightforward code with very complex business logic. In my experience, code complexity increases over time, mostly because the code tends to become more and more tightly coupled. Coupling is just what happens by default without careful consideration of every new feature and bug fix.

Demanding a high degree of test coverage of an overcomplicated codebase is going to be VERY time-consuming and very likely not worth the time investment. Instead, focus on how to simplify (decouple) the code.

I find it an excellent strategy to concentrate the side effects, e.g. in specific functions and namespaces. Simple code will beg for test cases.
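A tiny sketch of what that separation can look like (all names hypothetical): keep the decision pure, and keep the side effect in a thin wrapper.

```clojure
;; Pure - trivially testable with plain data, no mocks needed.
(defn expired?
  [now {:keys [expires-at]}]
  (and expires-at (neg? (compare expires-at now))))

;; Side-effecting - thin, and concentrates the I/O in one place.
(defn purge-expired!
  [db now]
  (doseq [entity (fetch-all db)     ; fetch-all is hypothetical
          :when (expired? now entity)]
    (delete! db (:id entity))))     ; delete! is hypothetical
```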

Symptom 2 - Loads of build/runtime warnings

On the one hand, a warning in itself is not a problem. It is just a warning. But warnings are noise in which a new (maybe important) warning can hide. On the other hand, suppressing all warnings isn’t a solution either, as you might miss when that critical warning suddenly shows up.

We should not treat warnings superficially - say, solely by running the software and concluding “hey - it still seems to work”. A warning is often about some special case that will trigger unwanted behavior. Understand why the warning is there. Evaluate if it could have an impact either now or in the future. Make keeping a warning around a well-considered choice and document your reasoning. Such documentation could be as simple as a comment by the line of code that introduced the warning. The comment should include the warning itself, so it is easy to search for (i.e., in the future, when you have forgotten or a new team member doesn’t understand why).
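Such a comment might look like the following sketch (the warning text follows Clojure’s reflection-warning format; the code and reasoning are illustrative):

```clojure
;; Known warning, kept deliberately:
;;   "Reflection warning, my_app/core.clj:42 - reference to method
;;    getBytes can't be resolved."
;; A ^String type hint would silence it, but performance is irrelevant
;; on this cold path, so the reflective call is acceptable.
(defn checksum [s]
  (hash (.getBytes s)))
```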

This approach is not about having a religious “zero warnings” policy. Warnings describe potential problems, and ignoring them will deprive you of the opportunity to make a deliberate choice. Furthermore, having too many warnings will hide new warnings in the noise. Find the right balance.

Symptom 3 - Sparse code documentation

Lots of languages allow for documentation as part of the code. Simply demanding this documentation be present does not solve the problem. I’ve seen lots of documentation that repeats the function name or repeats what the function does line by line. You don’t want that. Documentation is HARD.

It is worth repeating… Documentation is HARD.

Good documentation isn’t just a long-term investment, “only” useful for developers somewhere in a distant future. When writing documentation like JavaDoc, Clojure doc-strings, etc., I’ve experienced that I need to understand things more thoroughly to articulate the meaning for somebody else. More than once, by immersing myself I found a better solution or identified a problem while writing documentation. It is almost like “rubber ducking”, except you do it when you think you have it all figured out.

Symptom 4 - Poor commit hygiene

Fix bug and Fix bug for realzies this time are just outright useless commit messages. Poor commit hygiene signals that reviewing and bug-hunting are low priorities. Well-formed commits allow for a smoother review process, and bug hunters are much better off because the commit history provides them context. Temporarily fixing a bug might be as easy as reverting a commit, which would not be possible with poor commit hygiene, such as squeezing multiple changes into a single commit.

You help your team and yourself when providing quality commit messages. I will use this opportunity to direct your attention to Chris’ excellent guide: How to Write a Git Commit Message.

Remember that the commit history IS NOT ABOUT YOU (developers come and go), but about how the software evolved. Six months from now, nobody cares that you corrected code based on review feedback. Imagine having baked a great batch of cookies; nobody is interested in the two failed attempts that got thrown away. What is interesting is the “recipe” for those cookies (assuming you came prepared).

I strongly encourage sanitizing your commits via “rewrites” in a feature branch, because it allows describing how the software evolved. Rewriting commits leaves behind all the detours and dead ends (usually as comments in the PR). “Lessons learned” and other things worth remembering belong in the documentation or as comments in the code - never in a commit message or, worse, somewhat derivable from the “commit history.”

Symptom 5 - Poor review process (LGTM)

A good review process can help to reduce “poor commit hygiene.” But the review process itself adds so much more value:

  • the debate on a solution is a learning tool for submitter, reviewer, and readers snooping in,
  • it can assist with finding bugs early, and
  • it increases code quality i.e., with better function naming and more understandable documentation, etc.

I’ve seen loads of LGTM (read: Looks Good To Me) approvals on PRs, and when that is the norm, it signals that the team doesn’t take the review process seriously enough.

Angie Jones (@techgirl1908) did a superb blog post (The 10 commandments of navigating code reviews) on how to embrace code reviews.

Conclusion

Clinging to easy-to-measure metrics like test and doc coverage, enforcing linting or rules about commit messages (length and line breaks), etc., cannot and will never guarantee high-quality software on its own.

The common denominator of all the symptoms above is that they usually appear when we cut corners and do not take the required time.

The actual code base is only a small piece in the big puzzle of software quality. A change in processes like planning and review, communication, documentation, available tools, tooling usage, culture, and prioritizing time can significantly affect the quality of software, both for better and worse.

Avoid symptom treatment. Look for the real problem.