It's crazy how we keep going through all these injunctions (religions) about software. They all look amazing on paper, feel like common sense, and yet 50 years in, software is garbage 90% of the time.
Yet we keep bringing this stuff up like it's some sort of genius insight / silver bullet.
I don't think that's the case, because all those schools of thought (your DRY, your SOLID, your DDD, etc.) all have opposite schools of thought rife with other similarly popular mantras.
The problems in engineering rarely stem from a lack of principles; they have way more to do with mismanaged projects, arbitrary deadlines, shifting priorities, unreliable sources of data, and misunderstood business logic. All those fancy acronyms, all the SCRUM and agile in the world, will never make up for that.
That's really not been my experience when reviewing code. The bad code I've seen has been due to misusing language features, not knowing the principles in these articles, misunderstanding those principles, or blanket-applying them to everything.
For example, abstracting every piece of similar code to make it "DRY" because they don't understand that it's about concepts, not code.
At the risk of turning a unison into a chord, here's my two cents.
If:
1. You know where the 'creases' of orthogonality are. You've carved the turkey 1000 times and you never get it wrong anymore.
2. As a result, there is hardly any difference in complexity between code that is and isn't easy to extend.
Then write code that is easy to extend, not delete.
The question is whether your impression of the above is true. It won't be for most junior developers, and for many senior ones. If orthogonality isn't something you preoccupy yourself with, it probably won't be.
In my experience, the most telling heuristic is rewriting propensity. I'm talking about rewriting while writing, not about refactoring later. Unless something is obvious, you won't get the right design on the first write. You certainly won't get the correct extensible design. If you're instructed to write it just once, then by all means make it easy to delete.
Here's an algebraic example to keep things theoretical. If the easy to delete version proposed by the article is:
f(x) = 6x^2 - 5x + 1
The prospective extensible version is:
g(x,a,b) = ax + b
f(x, q) = q(x, 3, -1) * q(x, 2, -1)
f(x, g)
It's the generalization for factorable polynomials. It's clearly harder to read than the easy to delete version. It's more complex to write, and so on.
However, it's algebraically orthogonal. It has advantages in some cases, for instance if you later add code for a 6th-order polynomial and need to use its zeroes for something else.
We know that it could be better in some cases. Is it a good bet to predict that it will be better overall? The problem domain can fracture across a thousand orthogonal "creases" like this one. The relevant skill is in making the right bets.
Here's an example that's not orthogonal. Let's say we think the 6 coefficient might be more likely to change in the future:
g(x, a) = ax^2 - 5x + 1
f(x, q) = q(x, 6)
f(x,g)
This version is most likely just adding complexity. A single function is almost always a better bet.
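To make that concrete, here is the first pair as a quick Python sketch (function names invented for illustration):

  # Easy-to-delete version: one plain function.
  def f_simple(x):
      return 6 * x**2 - 5 * x + 1

  # "Extensible" version: the factored form, with g as the generic linear factor.
  def g(x, a, b):
      return a * x + b

  def f_factored(x, q):
      # 6x^2 - 5x + 1 = (3x - 1)(2x - 1)
      return q(x, 3, -1) * q(x, 2, -1)

  assert f_simple(2) == f_factored(2, g) == 15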
Once you can load up a full codebase into an LLM I'm hoping the cost to update client code is significantly reduced. Then you could focus on evolving the design without all the grunt work.
I'm also betting on this, that one day I'll be able to dump a codebase into an LLM and it will clean up the code. Not rewrite it, not restructure it, just clean it up. Remove unused code and comment it sensibly. Maybe also suggest some tests for it and implement them separately.
Comments should be based on intention. If I, as the programmer, am writing a piece of code and feel like there's something that I need to communicate about my intention in writing this, then I should. But if it's just surface level analysis, comments are just noise most of the time.
Copilot already does this, at least for individual chunks of code (and text, for that matter). Not for a whole codebase, but I think that's going to be a matter of time.
I am not sure that is something that applies 100%, but I understand the concern.
It is my understanding that we should try to build solutions to current problems, and be open to future use cases that could involve small additions in functionality.
It would be stupid to design an unmodifiable system just because some parts can be deleted and we are not sure what future needs are. Code should always be easy to extend, in my opinion.
Conversations like this are always difficult to discuss at a high level because the way we implement the words we use can be very different. Code can be written in a way that a lot of complexity is added in order to make it extensible, or it can be written in a way where simplification is used to make it extensible. Both authors would agree that extensible is good.
Specifically, write tests that identify disposable code. More specifically, you hopefully wrote some disposable code that is a modular extension of something close to the core. Write tests that demonstrate which of those deserves to be core, and which is necessary for a requirement but disposable. Since the article brings up shared apis, hopefully when you arrive on a new project, those are well understood as requirements paired with test cases. Repeat in the opposite direction in dependent projects.
I worked with someone once who followed every quick tip like this that they heard, so adamantly and to such an extreme, that now they all make me feel sick.
Implementing choice is superior. Not only can your program be capable of more actions, but the process of thinking about how to include these features leads to focusing on your codebase which leads to refactoring, better code. With time the code becomes so flexible that adding features is easy, because your foundation is superior. And in the process other core functionality gets fixed and becomes better.
C# is pretty good about these, with extension methods and event handlers. With event handlers instead of virtual methods, it's much easier to separate the pieces.
I hope I won't offend anyone by pointing at it [1]. This is a somewhat popular tool to evaluate macroeconomic policies in the EU.
A combination of language choice (C# being the natural language Microsoft Excel bosses tend to ask their interns to use), the usual churn of academic undergrads, and loads of other cultural failures are the reasons this monster exists.
Someone should write a book on how to make the worst codebase ever, and start with EUROMOD.
I could write about creating the worst possible environment to be a software developer, having worked at the JRC for five years.
I'm not sure how constructive that would be. I'm still hurting because the IT department decided the only way to deploy my Java app was through rsyncing to a running Tomcat installation, allowing class files from several deployments prior to resurface in memory, causing some beautiful bugs.
Or the time they decided to buy a Hadoop cluster at a cost of EUR 100k, which I told the IT dept they wouldn't be able to connect to from the outside world because the network rules are carved in stone. They bought it, and guess what: network ops said no.
The ten-foot-high touch screen, the car emissions data stored in Excel files, the 80 million euros spent on a website, or the time the partner research group refused to release the data we had funded, so we couldn't run workshops or release the project (around EUR 2 million).
You can delete while rsync'ing, but I guess the issue is not in rsyncing itself, rather in the disempowerment of individual contributors.
You could have argued to add --delete for your case, as well as requesting a shutdown before and a start after, but I guess explaining this to countless morons is too much to ask of a humble developer.
OTOH, this rsyncing story probably means you were allowed to choose the wrong development framework to start with, because rsyncing PHP is much more reasonable.
No, the issue was files cached in memory. No amount of deleting from the file system is going to delete files cached by the servlet, which is why the servlet itself needs to be restarted.
My favorite saying: “simple is robust”
Similar in spirit to Lehman’s Law of Continuing Change[0], the idea is that the less complexity a system has, the easier it is to change.
Rather than plan for the future with extensible code, plan for the future with straightforward code.
E.g. only abstract when the situation requires it, encourage simple duplication, use monoliths up front, scale vertically before horizontally, etc.
I’ve built many 0-1 systems, and this is the common thread among all of them.
[0] https://en.m.wikipedia.org/wiki/Lehman%27s_laws_of_software_...
Sure, but when applying the "simple is robust" principle it is extremely important to also understand intrinsic complexity. Not handling edge cases etc. does not make for robust code, no matter how much simpler it is.
This is where the advice in the article is excellent.
If you start with code that's easy to delete, it's often possible to alter your data representation or otherwise transform the problem in a way that simply eliminates the edge cases, with the result being a design that is simpler by virtue of being more robust.
If you start with code that's hard to delete, usually by the time you discover your edge and corner cases it's already too late and you're stuck solving the problem by adding epicycles.
Yes, but I definitely also see the opposite quite a bit: Somebody several layers down thought that something was an edge case, resolved it in a strange way, and now you have a chain of stuff above it dealing with the edge case because the bottom layer took a wrong turn.
The most common examples are empty collections: either disallowing them even though it would be possible to handle them, or making a strange choice like using vacuous falsity, i.e.
all [] == False
(Just to illustrate what I mean by "vacuous falsity": Python's all correctly returns True.)
Now, every layer above has to special-case these as well, even if they would be a completely normal case otherwise.
Your example perfectly illustrates oversimplification: attempt to stuff categorical variable into another of lower order. If a language has absence of value available as an expressible concept (nullability), then a list is at least 3-way categorical variable: absence of value, empty list, non-empty list. Any attempts to stuff that into a binary truthy value will eventually leak one way or another.
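To illustrate in Python (a minimal sketch; the helper names are invented):

  from typing import Optional

  def describe(xs: Optional[list]) -> str:
      # Three distinct cases: absent, empty, non-empty.
      if xs is None:
          return "absent"
      if len(xs) == 0:
          return "empty"          # a perfectly normal case
      return f"{len(xs)} item(s)"

  # Collapsing to a binary truthy value conflates "absent" with "empty",
  # so every caller has to recover the distinction on its own:
  def collapsed(xs: Optional[list]) -> bool:
      return bool(xs)             # None and [] both become False

  assert all([]) is True          # Python's all() is vacuously true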
Failing to account for this gives you Wayland (which at this time is more complex than X11).
Is it actually more complex?
I find it more understandable, it’s just that DEs need to write their own compositors.
X11 has plenty of warts, but Wayland has more.
Example: screenshot. X11: "please tell me the pixels in the root window". Wayland: "please tell me the extension number of the portals extension so I can open a portal to pipewire so I can get the pipewire connection string so I can connect to the pipewire server so I can ..."
Example: get window position on screen.
Example: set window title.
X11 is a protocol about managing windows on a screen. Wayland is a protocol about sending pixel buffers to an unspecified destination. All the screen-related stuff, which is integral to X11, is hard to do in Wayland with a pile of optional extensions and external protocol servers which do not interact.
X11 is also more standardized, de facto, because there are fewer server implementations, which is not just an accident but is by design.
X11 is far more inclined towards the idea of clean separation of policy and mechanism, which I think is becoming more and more evidently correct across the board of programming. When you start talking about libraries and layers, a policy/mechanism split is part of how to write layered code correctly:
base mechanisms that interpret the raw problem correctly (e.g. pixels on a screen, mouse position)
-> some policy that is in some ways mechanism with slightly more abstraction (e.g. drawing shapes)
-> some more policy-mechanism abstraction (e.g. windows)
...
until you get to your desired layer of abstraction to work at. This goes along with modularity, composition, code reuse. X11 itself has many design flaws, but Wayland's design is untenable.
X11's separation of policy and mechanism was a mistake. Maybe it made sense at the time - I don't know. GUIs were new at the time. Now that we know how they're supposed to work, the flag should really be called "I am a window manager" rather than "root window substructure redirect", and "I am a special window" (e.g. combobox drop-down) rather than "ignore substructure redirect" for example. (Even better, define some kind of window class flag so the window manager CAN see it and knows it's a combo box drop-down).
> and "I am a special window" (e.g. combobox drop-down) rather than "ignore substructure redirect" for example. (Even better, define some kind of window class flag so the window manager CAN see it and knows it's a combo box drop-down).
I think X11 has had that for a very long time. In the late 2000s when Beryl was still separate from Compiz, it was almost trivial to target things like dropdowns by a descriptive name and give them different effects than regular windows. Mine had an accordion effect while windows would burn up.
My point is that X is in the right direction more than Wayland is, in the spirit of its design, and major pain points of X are largely due to its specific design/implementation. Perhaps an outgrowth of having a lot of tiny policy-mechanism components is lack of standardization, which did strike X, but I think that's an orthogonal concern and not better served by overly large, inflexible components.
There will always be edge cases, and yes they will make the code more complicated, but what really helps is automatic testing to make sure those edge cases don't break when making changes.
Setting up automatic testing alone tends to add its own layer of complexity. At least it's worth it.
It doesn't have to be difficult. For example, when developing user interfaces I have a secret key combo for triggering the latest test, and another for running all tests. And I make mock functions that will trigger user interaction automatically. I inline the tests so they are next to the code being tested. I also place them inside comments so I can regexp-remove the tests for the release, because I don't want my program to be more than two MB; but if you don't care about size you could just leave the tests there so that they can be triggered by users in a prod environment as well. The problem with modern development is that the frameworks make everything more complicated. Just ditch the leaky abstractions and increase your productivity 100x.
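A rough sketch of that idea in Python (the markers and the strip command are invented conventions, not a standard tool):

  def parse_size(s):
      """'2MB' -> bytes."""
      units = {"KB": 1024, "MB": 1024 ** 2}
      return int(s[:-2]) * units[s[-2:]]

  #<test>
  def test_parse_size():
      assert parse_size("2MB") == 2 * 1024 * 1024
      assert parse_size("3KB") == 3072
  #</test>

  # Strip for release, e.g.: sed '/#<test>/,/#<\/test>/d' app.py > release.py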
> encourage simple duplication
A rule I like to follow:
- first time: write it
- second time: copy it
- third time: maybe refactor it
All such rules seem designed for a person not engaging their brain.
Is this "the same" thing? If so - extract and reference. Or is it "a different" thing which is superficially similar? Then don't.
Knowing when two things are one thing or one thing is two things is most of our job, right?
DRY is a terrible, terrible principle because it’s correct but requires programmers to make this decision. Which they won’t, because DRY has taught them that all duplication is bad. The flip side is what you’re saying, where there are simply things it wouldn’t make sense to duplicate. I’m a strong advocate against basically every Clean Code principle, really anything which isn’t YAGNI. That doesn’t mean I think you should create datetime services every time you need them. It doesn’t mean I don’t think you should make a “base” audit mixin/abstract when you want to add “created_at”… to your data model in your API.
I think a better way to look at it than “third time - consider refactor” is to follow this article and ask “will this ever need to be extended?”. If the answer is yes, then you should duplicate it.
This way you won’t get a flying dog in your OOP hellscape but you also won’t have to change your holiday service 9 million places when your shitty government decides to remove one of them (thanks Denmark). Between the two, I would personally prefer working on the one where I have to do the 9 million changes, but I would obviously prefer neither.
> Knowing when two things are one thing or one thing is two things is most of our job, right?
Yes, but often we don't know the domain well enough, and yet "this feature must be available yesterday". So add tests, copy, release. And when you have to either do it again or update this code and its original, you should know more and be able to refactor and give good names to everything.
Everything in balance. While I agree with this philosophy, I've also seen lots of duplicate bugs because it wasn't realized there were two copies of the same bug.
Agreed! I'll usually go one step further for early projects and lean towards 3rd time copy, 4th time refactor.
Example: So much early code is boilerplate CRUD that it's tempting to abstract it. 9 times out of 10, you'll create a quasi-ORM that starts inheriting business logic and quickly grows omni-functions.
Eventually you may actually need this layer, assuming your system miraculously scales to needing multiple services, datastores, and regions.
However, this doesn't just apply to the obvious: you may find omni-logic that made a feature simpler once and is currently blocking N new features.
Code is cheap, especially today. Complexity necessarily constrains, for better or worse.
Hence why I'd rather look at whether two pieces of code change together, as opposed to whether they just look the same.
If I need to introduce the same feature in multiple places in roughly the same way, that's a decent indication code wants to be the same and wants to change together. That's something to consider extracting.
Fixing the same bug in several places is a similar, but weaker indication. It's weaker, because a bug might also occur from using a framework or a library wrong and you do that in several places. Fixing the same business logic error in several places could mean to centralize some things.
It’s so easy to accidentally write an ORM or a database. I constantly stop and think; is this piece of code secretly a database?
change it, fix it, upgrade it.
+1, but I'm not sure if the "simple is robust" saying is straightforward enough? It opens up to discussion about what "simple" means and how it applies to the system (which apparently is a complex enough question to warrant the attention of the brilliant Rich Hickey).
Maybe "dumb is robust" or "straightforward is robust" capture the sentiment better?
Copy/paste is robust?
As a biomedical engineer who primarily writes software, it’s fun to consider analogies with evolution.
Copy/pasting and tweaking boilerplate is like protein-coding DNA that was copied and mutated in our evolutionary history.
Dealing with messy edge cases at a higher level is like alternative splicing of mRNA.
The usual metric is complexity, but that can be hard to measure in every instance.
Used within a team setting, what is simple is entirely subjective to that set of experiences.
Example: Redis is dead simple, but it's also an additional service. Depending on the team, the problem, and the scale, it might be best to use your existing RDBMS. A different set of circumstances may make Redis the best choice.
Note: I love "dumb is robust," as it ties simplicity and straightforwardness together, but I'm concerned it may carry an unnecessarily negative connotation to both the problems and the team.
Simple isn't necessarily dumb.
Dull?
Indeed, simple is not a good word to qualify something technical. I have a colleague, and if he comes up with something new and simple, it usually takes me down a rabbit hole of mind-bending and head-shaking. A matter of personal perspective?
Is my code simple if all it does is call one function (that's 50k lines long) hidden away in a dependency?
You can keep twisting this question until you realize that without the behemoths of complexity that are modern operating systems (let alone CPUs), we wouldn't be able to afford the privilege to write "simple" code. And that no code is ever "simple", and if it is it just means that you're sitting on an adequate abstraction layer.
So we're back at square one. Abstraction is how you simplify things. Programming languages themselves are abstractions. Everything in this discipline is an abstraction over binary logic. If you end up with a mess of spaghetti, you simply chose the wrong abstractions, which led to counter-productive usage patterns.
My goal as someone who writes library code is to produce a framework that's simple to use for the end user (another developer). That means I'm hiding TONS of complexity within the walls of the infrastructure. But the result is simple-looking code.
Think about DI in C#: it's all done via reflection. Is that simple? It depends on who you ask: is it the user, or the library maintainer who needs to parametrize an untyped generic with 5 different type arguments?
Obviously, when all one does is write business logic, these considerations fall short. There's no point in writing elegant, modular, simple code if there's no one downstream to use it. Might as well just focus on ease of readability and maintainability at that point, while you wait for the project to become legacy and die. But that's just one particular case where you're essentially an end user from the perspective of everyone who wrote the code you're depending on.
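As a rough Python analogue of that reflection trick (a toy container with invented names, not any real DI framework):

  import inspect

  class Container:
      def __init__(self):
          self._known = set()

      def register(self, cls):
          self._known.add(cls)

      def resolve(self, cls):
          # "Reflection": inspect the constructor, build known deps recursively.
          params = inspect.signature(cls.__init__).parameters
          deps = {name: self.resolve(p.annotation)
                  for name, p in params.items()
                  if p.annotation in self._known}
          return cls(**deps)

  class Db: ...

  class Service:
      def __init__(self, db: Db):
          self.db = db

  c = Container()
  c.register(Db)
  c.register(Service)
  svc = c.resolve(Service)      # the user-facing call stays dead simple
  assert isinstance(svc.db, Db)

The hidden machinery is genuinely complex, but the code the end user writes stays simple-looking.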
Can’t upvote enough. Too much dogshit in software is caused by solving imaginary problems. Just write the damn code to do the thing. Stop making up imaginary scaling problems. Stop coming up with clever abstractions to show how smart you are. Write the code as a monolith. Put it on a VM. You are ready to go to production. Then when you have problems, you can start to solve them, hopefully once you are cash positive.
Why is your “AirBnb for dogs” startup with zero users worrying about C100K? Did AWS convince you to pay for serverless shit because they have your interests in mind, or to extract money from you?
I am not sure about that. But I am certain the article Amazon published on cutting their AWS bill by 90% by simplifying juvenile microservices into a dead simple monolith was deleted by accident.
You can't wish the complexity of business logic away. If it is vast and interconnected, then so is the code.
Related:
Write code that is easy to delete, not easy to extend (2016) - https://news.ycombinator.com/item?id=24989351 - Nov 2020 (30 comments)
Write code that is easy to delete, not easy to extend (2016) - https://news.ycombinator.com/item?id=23914486 - July 2020 (109 comments)
Write code that is easy to delete, not easy to extend - https://news.ycombinator.com/item?id=18761739 - Dec 2018 (2 comments)
Write code that is easy to delete, not easy to extend - https://news.ycombinator.com/item?id=11093733 - Feb 2016 (133 comments)
Yep, to recycle a brief analysis of my own youthful mistakes:
____
I've come to believe the opposite, promoting it as "Design for Deletion."
I used to think I could make a wonderful work of art which everyone will appreciate for the ages, crafted so that every contingency is planned for, every need met... But nobody predicts future needs that well. Someday whatever I make is going to be That Stupid Thing to somebody, and they're going to be justified demolishing the whole mess, no matter how proud I may feel about it now.
So instead, put effort into making it easy to remove. This often ends up reducing coupling, but--crucially--it's not the same as some enthusiastic young developer trying to decouple all the things through a meta-configurable framework. Sometimes a tight coupling is better when it's easier to reason about. [...]
https://news.ycombinator.com/item?id=41219130
> So instead, put effort into making it easy to remove.
You might, but there's also going to be other people that will happily go ahead and create abstractions and logic that will form the very core of a project and entrench themselves to such a degree that they're impossible to get rid of.
For example, you might stumble upon CommonExcelFileParser, CommonExcelFileParserUtilities, HasExcelParseStatus, ProductImportExcelParser, ProductImportExcelParserView, ProductImportExcelParserResultHandler and who knows what else, the kind of stuff that ends up being foundational for the code around it, much like how if you start a front end project in React or Angular, migrating to anything else would be a Sisyphean task.
In practice, that means that people end up building a whole platform and you basically have to stick with it, even though some of the choices made might cause bunches of problems in the future and, due to all of the coupling, refactoring is way harder than it would be in an under-abstracted codebase.
I'm not sure what to do then. People seem to like doing that more than applying KISS and YAGNI and making code easy to delete.
Not my originals, and I cannot recall who said this... But it's completely on point
* Software has a tendency to become maximally complex. You either have an actually complex domain, or the developers will find a way to increase the complexity (..because otherwise, they're bored)
* Good software is modular and easy to remove. Consequently, good software will keep getting replaced until it's bad and cannot be removed anymore
Hard to remove doesn't mean impossible to remove.
Refactoring or fixing bad codebases is a thing.
Yeah, it was probably "won't be removed anymore" or similar. As I said, I don't remember who said it and was kinda paraphrasing
Dealing with precisely this right now. Written by a consultant who I, maybe uncharitably, suspect is trying to ensure his job security. At this point it is hard to even understand what's going on behind the layers of handlers, factories, and handler factories, forget about removing things. It works, though, so no one wants to stick their neck out and call it out for fear of being labelled "not smart".
It still depends. For a business line application, yes, and 10x yes. It will change, it will move; don’t try to foresee business requirements. Just write something that will be easy to replace or throw away.
Frameworks and libraries, not really: for those you still have to adjust to whatever happens in the world, but at a much saner pace.
The biggest issue is when devs want to write a “framework” while working on a business line application that already uses frameworks like Rails/ASP.NET etc.
I would say the biggest issue are the frameworks themselves: they practically force you to fit your code to their architecture, and before you know it, your logic is split across innumerable classes. Laravel (with which I have the most experience) has models, controllers, views, service providers, data transfer objects etc. etc. - that makes it (arguably) easier to write and extend code, but very hard to refactor/delete.
> Business line application yes and 10x yes. It will change it will move, don’t try to foresee business requirements. Just write something that will be easy to replace or throw away.
This is correct, but from my experience of working in the same company for over a decade: You'll learn to foresee requirements. Especially the "we'll never need that" ones that become business critical after a few months/years...
Like the path that starts with a "simple" system of "soft deletes" for Foo records, which progresses through a period of developer-assisted "restores" or merges, and then they want even older info, and to make reports...
However it would have all been so much easier if they'd realized their business domain called for "Foo Revisions" in the first place.
Sometimes things change, sometimes we chose the wrong abstraction.
Unless you’re writing the Linux kernel you shouldn’t write it like the Linux kernel.
Pretty wild that none of this talks about testing or observability. Tests are also something that you need to pay to maintain, but they give the ability of reducing the risk that you broke something when you removed it. Additionally when you've exposed your service to potential external callers you need to both have a robust way of marking some calls as deprecated, to be deleted as well as observing whether they are still being called and by what.
I recently did our first semi-automated removal of exposed GraphQL resolvers; metrics about how often a given resolver was called were already available, so parsing those yielded the set of resolvers I *couldn't* delete. GraphQL already has a deprecated annotation, but our service didn't handle that annotation in any special way. I added observability to flag if any deprecated functions have been called and let that run for sufficiently long in prod; then you can safely delete externally exposed code.
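In sketch form, that flagging step might look something like this (names invented, not the actual service code):

  import functools, logging

  log = logging.getLogger("deprecated-resolvers")

  def flag_deprecated(name):
      """Log every call so prod metrics show who still depends on this."""
      def wrap(fn):
          @functools.wraps(fn)
          def inner(*args, **kwargs):
              log.warning("deprecated resolver still in use: %s", name)
              return fn(*args, **kwargs)
          return inner
      return wrap

  @flag_deprecated("legacyUserProfile")
  def resolve_legacy_user_profile(obj, info):
      ...

Once the log stays quiet for long enough in prod, the resolver can be deleted with some confidence.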
This is going to be a bit of an oversimplification, but when you build things that are easy to delete, you’re not going to cause unintentional bugs when you delete them. It’s once you over-complicate things that everything becomes an interconnected mess where developers don’t know what sort of impact changes will have. There are a lot of ways to fuck this up of course. Maybe you’re following some silly “best practice” principle, maybe you’re doing “micro-services” in a manner where you don’t actually know who/what consumes which service. But then you’ve not built things that are easy to delete.
I think external consumption as you frame it is a good example of this. It’s fair to give consumers a reasonable warning about the deprecation of a service, but if you can’t actually shut it off when you want to, then you’ve not designed your system to let things be easily deleted.
Which is fair. If that works for you, then do things that way. I suspect it may not work too well if you’re relying on tests and observations to tell you if things are breaking. Not that I have anything against tests, but they’re not exactly a great safeguard if you have to let them tell you whether you broke something in a long, complicated chain. Not least because you’re extremely unlikely to have test coverage which will actually protect you.
If you write some number of lines of code, then you can expect some other number of lines of tests. If you delete some of the code, you may be able to delete some of the tests. The point is that you can talk about just the code, like TFA does, and assume a related impact on tests. TFA not saying anything about tests does not let us assume that TFA means one should not write tests.
Tests are great, but there’s more to programming than writing tests. People don’t have to mention tests in every article.
I agree, I am not a proponent of TDD or anything, but cleaning & restructuring large code bases without tests is a recipe for an outage/regression
Reading this:
> To write code that’s easy to delete: repeat yourself to avoid creating dependencies, but don’t repeat yourself to manage them. Layer your code too: build simple-to-use APIs out of simpler-to-implement but clumsy-to-use parts. Split your code: isolate the hard-to-write and the likely-to-change parts from the rest of the code, and each other. Don’t hard code every choice, and maybe allow changing a few at runtime.
I've been telling my computational physics students that the best computation is the one they don't need to bother with.
My experience is that the title doesn't hold. Code that is easy to delete is -- more often than not -- also easy to extend because it is layered, modular, and isolates different pieces through abstractions like interfaces or other type contracts.
Personally, I split code into two parts: the business logic and the actual implementation. The business logic may be duplicated due to its nature, but it should not have too many duplicated technical details in it. The implementation can be as shitty as you want, as long as you do not handle business logic directly in it and keep the business logic application independent. That way, if things get messed up and don't go well, you have the option to wipe the implementation as a whole instead of being forced to fix it and reverse-engineer the actual spec from the implementation.
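A tiny sketch of that split, with invented names (the business rule stays pure and application independent; the shell around it is the disposable part):

  # Business logic: pure, framework-free, doubles as the spec.
  def discounted_total(total_cents: int, is_loyal: bool) -> int:
      return int(total_cents * 0.9) if is_loyal else total_cents

  # Implementation: can be as shitty as you want; wipe and rewrite
  # without losing the spec above.
  def checkout_handler(request, db):
      user = db.users.get(request["user_id"])
      total = discounted_total(request["total_cents"], user.is_loyal)
      db.orders.insert(user_id=user.id, total_cents=total)
      return {"total_cents": total}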
Glaring mistake in the first paragraph:
> The problem with code re-use is that it gets in the way of changing your mind later on.
This is simply incorrect, especially in the generality in which it is stated. If you change your mind and the code was copy-pasted to ten places, then you have to change ten places. On the other hand, if the code is in a function, then you only need to change it once. And if you do find that one of the ten invocations should not be changed, then you can still copy-paste - or make the function more general.
Like crossing a street without looking, copy-pasting is almost always a bad idea.
In my experience, bad copy pasted code results in an annoying afternoon of tech debt repayment and fixes. Badly abstracted code results in months of tech debt repayment.
Of course, the answer is “don’t make bad abstractions”, but we all know how that one goes with a team and changing product reqs.
If only that were the case on a project at work. The badly copy-pasted code has diverged over the years, so you have 10 different versions of the same-looking code that individually have differing edge cases, half of them by mistake because someone forgot about the other 9.
I would trade that for one piece of mediocre abstracted code any day.
Oh yeah and everything in the codebase is copy and pasted.
Many times the code is reused in places where it is the correct code, so when you change it you have to slow down and split those places up. We have a git submodule of common UI widgets; changing one of those is impossible now, and it's easier to copy the component into the project and change it locally. It's a problem! The "shared code" needs to be as minimal as possible, because the sharing makes it harder to change.
> If you change your mind and the code was copy-pasted to ten places
The author would probably argue that you should have moved that code to a module / function.
Superficially, they contradict themselves on the topic. When read slowly, they use copy-paste as a way to indicate what code should be abstracted, and what really is a pattern to follow.
> On the other hand, if the code is in a function, then you only need to change it once. And if you do find that one of the ten invocations should not be changed, then you can still copy-paste - or make the function more general.
Ah yes, but what happens if you have to change 3 of the function invocations in one way, 5 in another, and the other two need to be completely rewritten because those aren't even using the same abstraction any more?
If it's all in one function, most developers will try to change that function to make all 10 cases work, when it should never have been one function in the first place.
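A hedged sketch of that failure mode, with everything invented for illustration:

    # What the shared function tends to become after requirements diverge:
    def format_price(amount, *, legacy=False, for_export=False):
        if for_export:                        # the 2 call sites that wanted raw cells
            return f"{amount:.4f}"
        if legacy:                            # the 3 call sites stuck on the old look
            return "$%d" % round(amount)
        return f"${amount:,.2f}"              # everyone else

Three small functions would be easier to change, and easier to delete, than one function sprouting mode flags.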
It is much much easier to fix ten copy-paste places than to untangle a knot that should never have been tied, once it's holding pieces of your system together.
There is no one size fits all.
In many cases I'd still rather have three or more versions of a function, many of which may just be very thin shims to accommodate a scenario, than 10 copy-pastes of variations. Or shim at the call site and keep one function, if that suits.
If a function does different things in different circumstances it should usually be split into different functions.
Languages like Erlang, which can have different versions of a function selected by pattern matching (with optional guards), make this convenient.
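A rough analogue using Python's structural pattern matching, with invented shapes (in Erlang each case would be a separate clause of the same function, with guards written after when):

    def area(shape):
        match shape:
            case ("circle", r) if r > 0:   # guard, like Erlang's "when R > 0"
                return 3.14159265 * r * r
            case ("rect", w, h):
                return w * h
            case ("point",):
                return 0
            case _:
                raise ValueError(f"unexpected shape: {shape!r}")

Each case can be added or deleted without touching the others, which is exactly the "split into different functions" discipline the parent describes.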
This is such a strange argument. You want to copy and paste code 10 times rather than making a function, because if the requirements change and the person assigned to fix it is a moron, then it might prevent the moron from choosing one specific way of making a mess?
You can't prevent future morons from doing moronic stuff in the future. They'll just find another moronic thing to do.
There's a great corollary here: bad code sticks around, because it's much harder to remove.
it's crazy how we keep going through all these injunctions (religions) about software; they all look amazing on paper and feel like common sense, and yet 50 years in, software is garbage 90% of the time
yet we keep bringing this stuff up like it's some sort of genius insight / silver bullet
I think it's because 90% of the garbage is being written by people that don't read or write articles like this one.
I don't think that's the case, because all those schools of thought (your DRY, your SOLID, your DDD, etc.) have opposite schools of thought rife with other, similarly popular mantras.
the problems in engineering rarely stem from a lack of principles; they have way more to do with mismanaged projects, arbitrary deadlines, shifting priorities, unreliable sources of data, and misunderstood business logic. All those fancy acronyms, all the SCRUM and agile in the world, will never make up for that.
That's really not been my experience when reviewing code. Bad code I've seen has been due to misusing language features, not knowing the principles in these articles, or misunderstanding the principles or blanket applying them to everything.
For example, abstracting every piece of similar code to make it "DRY" because they don't understand that it's about concepts not code.
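A throwaway illustration of the distinction, with an invented domain:

    # These look identical, so a purely textual DRY pass would merge them...
    def valid_username(s: str) -> bool:
        return 3 <= len(s) <= 20

    def valid_project_name(s: str) -> bool:
        return 3 <= len(s) <= 20

    # ...but they encode different concepts that merely coincide today.
    # The day project names are allowed 50 chars, a merged valid_name()
    # forces a flag or a re-split: the duplication was coincidence, not
    # a shared concept.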
Is this also advocating to use software as vanilla as possible and not go too deep in customisation?
And have a dozen versions of the same logic leading to subtle bugs in production.
At the risk of turning a unison into a chord, here's my two cents.
If:
1. You know where the 'creases' of orthogonality are. You've carved the turkey 1000 times and you never get it wrong anymore.
2. As a result, there is hardly any difference in complexity between code that is and isn't easy to extend.
Then write code that is easy to extend, not delete.
The question is whether your impression of the above is true. It won't be for most junior developers, nor for many senior ones. If orthogonality isn't something you preoccupy yourself with, it probably won't be.
In my experience, the most telling heuristic is rewriting propensity. I'm talking about rewriting while writing, not about refactoring later. Unless something is obvious, you won't get the right design on the first write. You certainly won't get the correct extensible design. If you're instructed to write it just once, then by all means make it easy to delete.
> The question is whether your impression of the above is true
If you think you are good enough to qualify you almost certainly don't qualify. If you do qualify then chances are you probably don't think you do.
Could you give an example of your point? Isn't writing orthogonal code the same as writing code that's easy to delete?
Here's an algebraic example to keep things theoretical. Say the easy-to-delete version proposed by the article is:
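A minimal reconstruction of that version, assuming the hardcoded quadratic x^2 + 5x + 6 (the constant 6 is taken from the follow-up example below; the rest is guessed):

    def f(x):
        return x * x + 5 * x + 6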
The prospective extensible version is the generalization for factorable polynomials, sketched below. It's clearly harder to read than the easy-to-delete version, it's more complex to write, and so on. However, it's algebraically orthogonal: it has advantages in some cases, for instance if you later add code for a 6th-order polynomial and need to use its zeroes for something else.
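Under the same assumption (x^2 + 5x + 6 factors as (x + 2)(x + 3), so its zeroes are -2 and -3), the factored generalization might look like:

    ZEROES = (-2.0, -3.0)  # assumed zeroes of the quadratic above

    def f(x, zeroes=ZEROES):
        # evaluate the factored form: the product of (x - z) over all zeroes
        result = 1.0
        for z in zeroes:
            result *= x - z
        return result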
We know that it could be better in some cases. Is it a good bet to predict that it will be better overall? The problem domain can fracture across a thousand orthogonal "creases" like this one. The relevant skill is in making the right bets.
Here's an example that's not orthogonal. Let's say we think the 6 coefficient might be more likely to change in the future:
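Reconstructed under the same assumption, this is presumably the quadratic with its constant term lifted into a parameter:

    def f(x, c=6):
        return x * x + 5 * x + c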
This version is most likely just adding complexity. A single function is almost always a better bet.

Once you can load up a full codebase into an LLM, I'm hoping the cost to update client code is significantly reduced. Then you could focus on evolving the design without all the grunt work.
Doesn't look promising so far
I'm also betting on this, that one day I'll be able to dump a codebase into an LLM and it will clean up the code. Not rewrite it, not restructure it, just clean it up. Remove unused code and comment it sensibly. Maybe also suggest some tests for it and implement them separately.
Comments should be based on intention. If I, as the programmer, am writing a piece of code and feel like there's something that I need to communicate about my intention in writing this, then I should. But if it's just surface level analysis, comments are just noise most of the time.
I don't see why this would be useful.
Copilot already does this, at least for individual chunks of code (and text, for that matter). Not for a whole codebase, but I think that's going to be a matter of time.
I wonder if such an LLM will actually be cheaper than a graduate student.
Why not both?
Because building for extensibility adds real complexity for a hypothetical need.
If you want the code to do something different later, then change, replace, or extend it at that point -- when you actually know what it needs to do.
I am not sure that is something that applies 100%, but I understand the concern.
It is my understanding that we should try to build solutions to current problems, and be open to future use cases that could involve small additions in functionality.
It would be stupid to design an unmodifiable system just because some parts can be deleted and we are not sure what future needs are. Code should always be easy to extend, in my opinion.
Conversations like this are always difficult to discuss at a high level because the way we implement the words we use can be very different. Code can be written in a way that a lot of complexity is added in order to make it extensible, or it can be written in a way where simplification is used to make it extensible. Both authors would agree that extensible is good.
That is an excellent and pragmatic point of view.
It's not an absolute and there are occasional good design decisions made in the name of extensibility
But what is an unmodifiable system? If it's code in your control, it can be changed, right?
If it is easy to understand, then it is easy to extend.
Write tests, not code.
Specifically, write tests that identify disposable code. More specifically, you hopefully wrote some disposable code that is a modular extension of something close to the core. Write tests that demonstrate which of those deserves to be core, and which is necessary for a requirement but disposable. Since the article brings up shared APIs, hopefully when you arrive on a new project those are well understood as requirements paired with test cases. Repeat in the opposite direction in dependent projects.
All nice code looks like this: int main(){}
I worked with someone once who followed every quick tip like this so adamantly, and to such an extreme level, that now they all make me feel sick.
Implementing choice is superior. Not only will your program be capable of more actions, but the process of thinking about how to include these features leads to focusing on your codebase, which leads to refactoring and better code. With time the code becomes so flexible that adding features is easy, because your foundation is superior. And in the process, other core functionality gets fixed and becomes better.
Can you explain what you mean with "implementing choice"?
This was written in the context of a discussion about whether or not to show resistance to feature requests from users; sorry for the confusion.
thank you
C# is pretty good about these, with extension methods and event handlers. With event handlers instead of virtual methods, it's much easier to separate the pieces.
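C# specifics aside, a rough analogue of the event-handler style in Python (names invented): expose a hook that callers subscribe to, instead of a virtual method they must subclass to override:

    class Downloader:
        def __init__(self):
            self.on_progress = []              # the "event": a list of callbacks

        def download(self, nbytes):
            for done in range(0, nbytes + 1, 1024):
                for handler in self.on_progress:
                    handler(done, nbytes)      # fire the event

    d = Downloader()
    d.on_progress.append(lambda done, total: print(f"{done}/{total}"))
    d.download(4096)

Deleting a handler is one line; deleting an override means touching a class hierarchy.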
And yet the worst ever code I saw was in C#.
I hope I won't offend anyone pointing at it [1]. This is a somewhat popular tool to evaluate macroeconomic policies in the EU.
A combination of the language choice (C# being the natural language that Microsoft-Excel bosses tend to ask their interns to use), the usual churn of academic undergrads, and loads of other cultural failures are the reasons this monster exists.
Someone should write a book on how to make the worst-ever codebase, and start with EUROMOD.
[1] https://github.com/ec-jrc/JRC-EUROMOD-software-source-code
I could write about creating the worst possible environment to be a software developer, having worked at the JRC for five years.
I'm not sure how constructive that would be. I'm still hurting because the IT department decided the only way to deploy my Java app was by rsyncing to a running Tomcat installation, allowing class files from several deployments earlier to resurface in memory and cause some beautiful bugs.
Or the time they decided to buy a Hadoop cluster at a cost of EUR 100k, which I told the IT dept they wouldn't be able to connect to from the outside world because the network rules are carved in stone. They bought it, and guess what: network ops said no.
The ten-foot-high touch screen, the car emissions data stored in Excel files, the 80 million euros spent on a website, or the time the partner research group refused to release the data we had funded, so we couldn't run workshops or release the project (around EUR 2 million).
The waste.
> rsyncing to a running Tomcat installation
You can delete while rsyncing, but I guess the issue is not in rsyncing itself, it's rather in the disempowerment of individual contributors.
You could have argued to add --delete for your case, as well as requesting a shutdown before and a start after, but I guess explaining this to countless morons is too much to ask from a humble developer.
OTOH, this rsyncing story probably means that you were allowed to choose the wrong development framework to start with, because rsyncing PHP is much more reasonable.
No, the issue was classes cached in memory. No amount of deleting from the file system is going to evict classes already loaded by the servlet container, which is why the container itself needs to be restarted.
We agree with you. The state of the codebase is very bad.
We are rewriting the codebase from scratch in Rust and Svelte.