We've been impacted by this. I migrated our services to Python 3.14 so we could attach profilers during runtime.
A couple of services looked like they had a memory leak. Memory was continuously increasing over time. Thanks to Python 3.14, we were able to use memray to understand what was going on. Those services were recreating HTTP clients (aiohttp) for every inbound request, and memory allocated by the downstream SSL lib was growing faster than it was being released.
We ended up rolling back to 3.13, which fixed the issue. I'll try again with 3.14.5.
If you are using "httpx", it's likely caused by a reference cycle. I made a PR to fix it but the maintainers haven't applied it. :-( https://github.com/encode/httpx/pull/3733
The reference cycle httpx creates is kind of a worst-case scenario for the incremental GC issue. Both the generational GC (3.13 and older) and the incremental GC are triggered by net new "container" objects (objects that hold references to other objects, like lists, as opposed to ints and floats). The short summary is that with the incremental GC you need to create more container objects before a collection triggers. In the case of the httpx reference cycle, you have a relatively small number of container objects hanging on to a lot of memory, due to the SSL context data (which is a big memory hog).
Reverting to the generational GC was the wise thing to do, even though it's a bit scary to do in a bugfix release. The incremental GC works for most people, but in the minority of cases where it doesn't, it uses quite a lot more memory. I'm pretty sure that with some additional tuning the incremental GC would be fine too, but it just didn't get that tuning. The generational GC has literally decades of real-world use (Guido merged my patch in June 2000, and Tim Peters did a bunch of tuning after that to optimize it).
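To make the threshold point concrete, here's a small sketch (the threshold numbers are implementation details and vary by version; the `Client` class is a stand-in, not httpx's actual code): a cycle pinning a large buffer stays alive until the cycle collector runs, and the collector is triggered by counts of container allocations, not by bytes.

```python
import gc

# Collections trigger on net new *container* allocations, not on bytes
# allocated, so a handful of cycle-forming objects can pin a lot of memory.
print(gc.get_threshold())  # e.g. (700, 10, 10); an implementation detail

class Client:
    """Stand-in for something like an httpx client pinning a big buffer."""
    def __init__(self):
        self.big = bytearray(10 * 1024 * 1024)  # ~10 MiB, like SSL context data
        self.me = self                          # the reference cycle

gc.disable()  # simulate "allocation threshold not reached yet"
c = Client()
del c  # refcount never hits zero: the cycle keeps the 10 MiB alive

# Nothing is freed until a cycle-detecting collection actually runs:
print(gc.collect())  # > 0: the unreachable cycle is found and freed
gc.enable()
```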
> I made a PR to fix it but the maintainers haven't applied it. :-( https://github.com/encode/httpx/pull/3733
Unfortunately, you may be the wrong gender to contribute to Encode repositories like httpx:
> I've closed off access to issues and discussions.
> I don't want to continue allowing an online environment with such an absurdly skewed gender representation. I find it intensely unwelcoming, and it's not reflective of the type of working environments I value.
— https://github.com/encode/httpx/discussions/3784
Discussed on Hacker News here: https://news.ycombinator.com/item?id=47193563
A fork discussed here: https://news.ycombinator.com/item?id=47514603
It was httpx indeed. I had aiohttp in mind because we ended up replacing that particular client with it.
We've been chasing down similar aiohttp client creation issues (linked to ...aiobotocore usage) for months now.
It's annoying that somehow talking to S3 etc requires so much churn. We have been trying to cache session objects and the like but clearly are still missing something.
Chasing this down has also made me realize how little Python libs use `weakref`, and how readily they build up circular references. The other day I figured out that Django's request session infrastructure creates a circular reference, meaning that requests have to get GC'd to get cleaned up in CPython.
I have a suspicion that the 3.14 problems are heavily linked to "real" workloads being almost entirely filled with cyclical objects.
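A minimal sketch of the pattern described above (the class names are illustrative, not Django's actual internals): a strong back-reference forms a cycle that only the cycle collector can reclaim, while a `weakref` back-reference lets plain reference counting clean up immediately.

```python
import gc
import weakref

class Request:
    def __init__(self):
        self.session = Session(self)

class Session:
    def __init__(self, request):
        # Strong back-reference: Request -> Session -> Request is a cycle,
        # so dropping the last external reference frees nothing until the
        # cycle collector runs.
        self.request = request

class WeakSession:
    def __init__(self, request):
        # Weak back-reference: no cycle, so refcounting frees both objects
        # as soon as the Request goes away.
        self.request = weakref.ref(request)

gc.disable()  # show what happens between collector runs
try:
    r = Request()
    ref = weakref.ref(r)
    del r
    print(ref() is not None)  # True: still alive, waiting on the GC

    r2 = Request.__new__(Request)  # skip __init__ to wire in WeakSession
    r2.session = WeakSession(r2)
    ref2 = weakref.ref(r2)
    del r2
    print(ref2() is None)     # True: freed immediately by refcounting
finally:
    gc.enable()
    gc.collect()
```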
On profilers: profiling will come in 3.15; are you referring to remote exec? It is a great feature I am very excited about, though at the same time I'm afraid the company won't allow ptrace capability in prod.
Yes. Remote exec allows me to attach profilers (e.g. memray) directly to a running process. I'm also excited about the upcoming statistical (CPU) profiler in 3.15.
"Python 3.14 shipped with a new incremental garbage collector. However, we’ve had a number of reports of significant memory pressure in production environments.
We’ve decided to revert it in both 3.14 and 3.15, and go back to the generational GC from 3.13."
Sounds like the right move to me.
The main benefit of python to me is that while slow, it's predictable. I do think they're going to get a lot more resistance to adding JITs, moving GCs, etc. it will become java with a million knobs to tune. If people want a JIT'd python just use pypy, right?
Java lost almost all those knobs a while ago (I mean, they're still there, but you're better off relying on the defaults). The modern GCs have one or at most two knobs remaining, and even those will become unnecessary next year. As for predictability, you get maximum pause times of well under 1 ms for heaps up to 16 TB.
The max pause time thing is a meme :) I have gotten multi-second pause times with ZGC. It depends on what hardware you run it on.
The new generational ZGC? I'm sceptical.
Have a reproducer?
Next year? Do tell
https://openjdk.org/jeps/8377305
As far as I know, java has 7 GC implementations, none of which are perfect, all of which have drawbacks
Lately, they seem to be working on CRIU, various heuristics, multi-stage in-process bytecode compilation...
Java is a mess; they are working hard to avoid fixing issues that nobody else has (and for which fixes are available).
>As far as I know, java has 7 GC implementations, none of which are perfect, all of which have drawbacks
Compared to Python's, all of them are beyond perfect. And 99.9% of the time you don't even need to use anything but the default.
> Compared to Python's, all of them are beyond perfect.
I somehow understand the situation less after reading this.
Is Python's GC bad, or are there cyclic reference issues? Is it possible to detect cyclic references perfectly? What does beyond perfect mean? If we have 7 and 0.1% of the time you need one of the 6 that is non-default, how do we choose? Is the understated version of "Compared to Python's, all of them are beyond perfect" "I think Java's are great"? If not, what about Python's impl makes it so lackluster to any of 7 of Java's?
> Is Python's GC bad, or are there cyclic reference issues?
Unless you're being pedantic and including reference counting without cycle detection as GC, if your GC has cyclic reference issues, your GC is bad.
> Is it possible to detect cyclic references perfectly?
Yes? That's the entire point of tracing GC. You have some set of root objects that you start with (globals, objects on thread stacks, etc.) and then you mark every object that's reachable from them. Anything that's not reachable is garbage, even if there are cycles within them.
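A toy version of that mark phase, over an explicit edge map rather than real heap objects, just to show why cycles pose no problem for tracing:

```python
def reachable(roots, edges):
    """Mark everything reachable from the roots; edges maps obj -> refs."""
    seen = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in seen:
            seen.add(obj)
            stack.extend(edges.get(obj, ()))
    return seen

# a <-> b is a cycle, but neither is reachable from the root r, so both
# are garbage despite the cycle (and despite their nonzero "refcounts").
edges = {"r": ["x"], "x": [], "a": ["b"], "b": ["a"]}
live = reachable({"r"}, edges)
garbage = set(edges) - live
print(sorted(garbage))  # ['a', 'b']
```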
As a Python-using SRE supporting Python Flask apps: most of us would love a JIT in Python, assuming it's pretty much a drop-in replacement.
PyPy doesn't have the support it needs and is stuck on 3.11.
It is the same for me. Predictability is better than any optimization.
Why not just use Go? It has a proper concurrent, non-moving GC that, AIUI, has not been associated with sudden memory spikes.
For a new project, teams can decide whether to use Go, but there are many millions of lines of existing Python servers out there.
Not to mention that there are differences in ecosystem, familiarity, and ergonomics that may make a team want to stick with Python.
“Just use Go” is not really actionable advice in most cases.
Libraries. I use both languages, and a survey of what libraries are available is part of picking an implementation language when starting a greenfield project.
It's a tradeoff. Go programs are extremely slow at starting up for example.
And if people want python with java, there's always Jython.
GraalVM has support for Python 3; unfortunately it's funded by Oracle.
If it makes you feel any better (it probably doesn't), the development of OpenJDK and the Java language itself is also mostly funded by Oracle
Java is funded by Oracle, all of it.
People parrot "use OpenJDK" without understanding that it is mostly Oracle employees working on it.
And if you dislike Oracle, the other minor contributors are Red Hat, IBM, SAP, Microsoft, Alibaba, Azul,... which for many HNers are the same.
jython has been basically unmaintained for quite some time
Well, they never made the jump to Python 3. But shipping 2.7 interpreters in 2024 was quite an achievement on its own. So their users already know this pain. And from my experience in academia, python 2.7 and java 8 will probably be used for another 20 years before the last machine running that stuff burns out.
Jython is unmaintained, I'd recommend Clojure. Use python libraries and code while seamlessly targeting the JVM.
jpype and graalpy are life.
Jython went EOL with Python 2 going EOL.
Resistance from anyone who matters to the developers?
I'm genuinely surprised that this Python change was even possible without a PEP.
Makes ya miss having a BDFL. Dang I didn't realize he's 70 now.
https://en.wikipedia.org/wiki/Guido_van_Rossum
I wouldn’t recommend running the latest Python in prod. Honestly, 3.x.7 releases are the most mature.
I'm currently in a .NET shop so not an issue for me, makes me wonder if Python will eventually adopt the concept of LTS releases, this could have been avoided as an issue if it was part of a non-LTS release.
All Python versions are LTS if you consider five years a good measure.
https://devguide.python.org/versions/
If all releases are LTS, then none are. Part of the point GP was making is that when some releases have a very short maintenance window, then changes that are terrible in them don't need to be reverted (since the maintenance window will close soon anyways).
Yeah it seems like a miss. I guess the thinking was that it wasn't developer-facing and just an internal optimization. But of course any change to garbage collection will change the memory and cpu dynamics of the process in a material way.
It's not a change to the language; it's a change to the CPython runtime.
PEPs aren't necessarily just for language changes, e.g https://peps.python.org/pep-0436/ which is largely a CPython implementation detail.
Exactly! Would like to understand more how that came about. PEP exists for a reason.
.NET seems to have regularly changed the garbage collector over the years and I do not remember any similar surprises in production. I wonder why they have had better experience?
I thought that by now dynamic garbage collection was a known quantity so that making changes, outside of out right bugs, is fairly safe and predictable?
One thing Microsoft does really well is eating its own dogfood and Microsoft feeds a ton of .Net dogs.
So any change to GC starts with massive .Net MSFT code base so they get extremely good telemetry back about any downsides and might be able to fix it in time.
Did really well, unfortunately.
There is almost no dog fooding on Windows development since version 8, Typescript team rather rewrite the compiler in Go, Azure has plenty of Go, Rust and Java projects alongside .NET.
Microsoft does use Go/Rust/Java in places, but they still have a ton of .Net.
Windows development isn't a case of "we are not dogfooding"; it's that incentives are misaligned with what customers want.
The .Net team's incentives are aligned with what customers want: a language that is highly performant and easy enough to write.
Oh, they really don't dogfood Windows development any longer, regardless of the incentives.
I have my WinRT 8, UAP 8.1, UWP 10, Project Reunion, .NET Native, C++/CX, C++/WinRT, XAML Islands, XAML Direct, WinUI 2.0, WinUI 3.0, WinAppSDK and what-not scars to prove how they aren't dogfooding any piece of it in any meaningful manner.
Heck they keep talking about C++ support in WinUI 3, as if the team hasn't left the project and is now playing with Rust instead.
They managed to turn plenty of early WinRT advocates into their harshest critics, who no longer believe anything else they put out, like this new Windows K2 project.
Well, .NET is just not in the same class as Go and Rust.
Go is, essentially, nearly perfect at what it does - even if the language itself leaves much to be desired and would ideally be much safer.
Microsoft should up their game. They have a few research languages in development.
They've always been great with languages. Hopefully, they rise to the occasion.
The only thing Go has going for it is that it got lucky with Docker and co., plus its UNIX/Plan 9/Inferno pedigree.
Now we're stuck with it in anything CNCF related.
I like my programming language flame wars just as much as the next guy but Go is a really easy language to get started with, while also being very fast. It's not just luck
> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
-- Rob Pike
Any reference to the biggest issues with Go in your opinion, and what are you comparing it to?
What? If you are talking web development, .Net is just about the same as Go. It's 100% Java OOP type writing but result is same, very performant API server.
Sure, Rust is completely different beast with different target system.
Java 1.5 kind of thing, with plenty of error-handling boilerplate, errors as strings, and SCM URLs straight in the code...
Actually there’s a change to dotnet 9 with how it handles the heap and GC which caused major issues for us.
I’ll confess the reason it hit us so hard is that our code quality was so low and so wasteful with allocations; previous versions just hid the problem better.
I remember working on the Windows Update back-end at Microsoft around 2005, and we had a problem where it would freeze up periodically, and not surprisingly that turned out to be caused by GC. But we noticed it before shipping, and we just tweaked some GC parameters.
So I think it was not a big problem for .Net because it gave you enough control over GC, and because people tested their code before putting it in production.
I think reverting is not a problem per se, but releasing a highly problematic version of such an essential component without proper testing is.
Yeah, they noted that it went in without a PEP. Looks like a PEP will come now if it can maintain on-par performance.
If I understand correctly, this is one of the changes that caused the regression:
https://github.com/python/cpython/pull/117120
This is the first time I've come across a change (a big one) that was implemented without going through a PEP. I thought that was standard.
If you're using containers, I believe this change was pushed in the image python:3.14.5-slim-trixie.
Python is such a mess.
Any language of Python’s size and popularity will be a mess, the only difference is what parts of it.
All these issues were known in previous attempts for removing the GIL. But if Instagram/Meta want it, everyone stands to attention and finds out the obvious problems years later. Kind of like in geopolitics.
I hope Meta switches Instagram to PHP/Hack so they leave Python alone.
The no-GIL work (free-threading) is unrelated to this incremental GC work.
Free-threading actually uses its own, separate GC: https://labs.quansight.org/blog/free-threaded-gc-3-14
In the world of AI written code, Python just doesn’t make sense. Converted about 100k lines in the last few months to golang and the performance is life changing. Curious if we will see global Python adoption fall by 75% or more in the next few years.
I think humans are still accountable for the code generated by agents.
You are free to switch language but you still need to understand it.
With a similar amount of experience with both languages I found Go much easier to read. I've always been a bit miffed why Python is seen as easy to read for experienced developers. I get the syntax is good for short code or people with little experience but my experience is those readability benefits went away quickly with time or complexity.
Why are you miffed about it? I legitimately hate reading golang with a passion and find Python to be pretty intuitive, outside of the odd ambitious list comprehension. I worked in a golang shop for several years, so it's not just a familiarity situation either.
We are just different. That's not something to be mad about.
In my opinion, most interpreted languages today tend to produce very dense code: fancy call chains and interleaving closures. If you're looking for a subtle bug, those are hard to reason about; you have to know the details of a lot of different APIs.
Go is verbose partly for that reason, but a silly loop is a silly loop. The constraints are clear; you only have to do the logic.
Python is a garbage language. Dynamic types are a disaster for maintaining large codebases and we waste enormous amounts of compute running large systems with it.
> Python is a garbage language. Dynamic types are a disaster
Python has a gradual type system.
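For what "gradual" means in practice, a quick sketch: annotations are checked by external tools such as mypy but ignored at runtime, and annotated and unannotated code mix freely.

```python
def typed_add(a: int, b: int) -> int:
    # Checked by a type checker like mypy; ignored by the interpreter.
    return a + b

def untyped_add(a, b):
    # Unannotated code coexists with annotated code in the same module.
    return a + b

print(typed_add(2, 3))        # 5
print(untyped_add("a", "b"))  # ab
print(typed_add("a", "b"))    # also "ab" at runtime; mypy would flag it
```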
We should all go back to writing assembly
> X is a terrible language because of the lack of static analysis available.
> (Mocking) Yes, that's why we should go back to Y with even worse static analysis.
Sure
No, we should write in one of the many modern programming languages that handle certain projects way better, including Kotlin, Go, or Java. The only things Python is best in class at are scripting and serving as a harness for high-performance C++ or Fortran.
What about the projects that python handles better?
Any language that uses error codes instead of exceptions is a non-starter for me. Produces code that craps all over the happy path.
Python has a different problem: it is slow as f---. I did a micro-benchmark comparison against 5 other languages in preparation for my Python replacement language. Outside of dictionary lookups, it is 50-600 times slower than C depending on the workload.
Go, Rust, etc. are fine. They land at 1.25-3x slower than C. But I prefer the readability of Python, minus its dynamic nature.
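For context, a hedged micro-benchmark sketch; absolute ratios like "50-600x slower than C" are workload- and machine-specific, so treat any single run with suspicion:

```python
import timeit

def hot_loop(n=100_000):
    # A trivial arithmetic loop: the kind of workload where CPython's
    # per-bytecode interpretation overhead shows up most strongly.
    total = 0
    for i in range(n):
        total += i * i
    return total

elapsed = timeit.timeit(hot_loop, number=10)
print(f"{elapsed:.4f}s for 10 iterations of hot_loop()")
```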
I think we'll eventually be generating machine code directly. But until then we should be using code that our team can actually read and understand. If you know Go, then that works for you. Not everyone does.
Doubt it. LLMs will always be more expensive per-token than compilers, and high level languages need fewer tokens than machine code. Also, type systems, warnings, overlap with natural language in names - those are very useful.
Nothing about the performance characteristics of Python changed with AI, so why would you use Python over golang if performance is a requirement/bottleneck? Trying to understand the reasoning, as to me golang and Python are equally simple to write and understand.
If language X is a person's comfort zone, that person will often default to it. Python is certainly more widespread than Go.
Also, even if it looks like that to you, there are still people who write code with their own hands.
Regardless of whether golang and python are actually equally simple, python certainly has the reputation of being easier to write and read than almost any other language. That is a big part of its popularity.
Python is not really simple though, the semantics are actually quite bonkers. It just has "simple"-looking syntax, but that only helps you for trivial programs where the bonkers semantics does not get in the way.
What about the semantics are bonkers in your opinion?
For personal projects, yes. For code going into production, you still need human code review, and that has to happen in a language that the humans you've hired are comfortable with. One day, we'll all be YOLOing vibe code straight into production, but that day is not today.
But that day is not today .. unless you are working for microslop or clownflare ? Half-kidding, sorry :)