I would bet money that Elastic uses a Terraform provider for GitHub and marked the repos private in an automated way, and that the reverse API operation doesn't work the same way.
It's possible that any delay is them trying to figure out how to get Terraform back to a good state rather than making the repos public being this inherently hard thing.
> It's possible that any delay is them trying to figure out how to get Terraform back to a good state rather than making the repos public being this inherently hard thing.
I don't know if it is Terraform, but if that were the case, it would actually be trivial to roll back the Terraform code itself, or even restore from a previous statefile.
All things considered it doesn't seem to be a destructive mistake, and it's not 18:00 on a Friday :)
My experience with non-AWS providers in TF is that they're less well maintained and buggier - in theory this should be easy, but people seem very afraid of TF and I can picture this getting chaotic.
But you're quite right that if they're comfortable enough, they should go into S3 and get a statefile they were happy with!
Permissions aren't the problem. But if you take a repo private, the upstream source of all the forks is wrong, all stars from folks outside your organization are gone, and so on. So you need GitHub support to restore everything.
And the details of how it happened are a bit different, but it was a configuration error (making things too secure).
[I work for Elastic]
This bears out the idea that the fastest way to get the truth on the Internet is to say something wrong first.
Well, it was a bad change. But we wouldn't want the wrong story to make it worse. It was "just" an error in our configuration.
Fair enough, and what I said was wrong too, so it's turtles all the way down! I butchered Cunningham’s Law, thank you for correcting me. The name of the law is confusing, since it was McGeady who pointed out that "the best way to get the right answer on the Internet is not to ask a question, it’s to post the wrong answer" (which is what I did, and what the parent comment did), yet it is attributed to the great Ward Cunningham, creator of the first wiki, who is a righteous dude. I have nerd-sniped myself on this. At least I'm less wrong now, thank you.
I also kinda wonder if they accidentally removed a user or some credential that has the permissions needed to make things public again; a TF change could involve both the public/private change and user account changes. It could take a bit to dig up an admin account to fix things.
You’re probably right, but I’m not sure I understand the point of managing a GitHub organization in Terraform, that sounds harder than it needs to be. Are there some reasons I’m missing?
> I’m not sure I understand the point of managing a GitHub organization in Terraform
+1 here
The pendulum went from "no tools, we manage everything manually" to "even smoke breaks need to be tracked and versioned".
Found https://status.elastic.co/incidents/9mmlp98klxm1
"Some public repositories are temporarily unavailable"
This may not be fully reversible: https://docs.github.com/en/repositories/managing-your-reposi...
"GitHub will detach public forks of the public repository and put them into a new network."
"Stars and watchers for this repository will be permanently erased"
This is probably the biggest impact. I just wrote in another comment that this didn't seem too destructive, but it's a bit more of a headache than it looks.
> Current forks will remain public and will be detached from this repository.
If I'm not mistaken, when they are public again everyone will need to update their origin?
If you have a local git origin set to the HTTPS URL of the parent repo, I don't think anything would change when it comes back online.
This sentence seems to be about GitHub internally treating forked repos as branches of the original repo to save space (as well as displaying "fork of X" in the UI), and there's no way to manually set that relationship other than the fork button. GitHub would have to manually reset that if they wanted to. (This setup is what allows those fake commit URLs where you use the URL of the original repo plus the commit hash of a commit in a fork.)
If it's a serious enough issue, GitHub staff can almost certainly still step in and manually restore all that stuff if needed.
It's back with 181 stars.
https://web.archive.org/web/20241009043953/https://github.co... And for those like me who don't pay attention it had 69k stars earlier in the month.
Well maybe it’ll take a day or two before they can get someone to restore the stars?
GitHub can probably update the stars in the database with an ad hoc query.
They didn't in this case: https://news.ycombinator.com/item?id=31033758
That's a FOSS tool though; Elastic is a for-profit company and might be willing to pay money to have it restored, and might already be a paying GitHub customer.
And GitHub might have implemented something that lets them restore it even if it wasn't possible before, as it's been two years since then.
Lots of possibilities that might change the outcome if Elastic is lucky.
From the referenced article:
"In our case, however, they refuse to do that, citing undesirable side effects and the cost of resources. We even offered GitHub financial compensation for any resources required. But, sadly, they declined. They have other priorities than resurrecting the community of one of the oldest and most popular community projects on their platform."
Certainly possible they implemented a way to do this over the last 2 years.
Yep. I just checked and the repo stands at "0 forks" and "194 stars". I don't know if GitHub can undo that somehow.
It seems to me GitHub would roll back such a mistake for a Microsoft repo, but for nobody else.
it's at 24k forks now
Ouch.
This is even more painful for small-to-midsize projects whose star counts help distinguish them.
I'm betting someone pushed a secret key somewhere and they made the repo private to try to limit the damage...
Could be, or maybe they just discovered the GitHub private repo leak issue that was discussed a few months back:
https://news.ycombinator.com/item?id=41060102
No need to spread rumours: it was a configuration error. GitHub support is helping with restoring everything, since the fork network, stars, and so on are otherwise all off.
And if you leak credentials, you just have to rotate them. Taking the repo offline would probably be too late anyway and causes a major mess, so it's not something I could recommend for popular repos.
[I work for Elastic]
It would be a "rumour" if I had stated it as the truth. If it's not the right explanation, then fine, but I see no need for defensiveness. I mentioned that possibility not to criticise Elastic, but because it's a security property of GitHub that very much violates the principle of least surprise and that I suspect caused a security problem for at least one of my previous employers. Well worth spreading awareness of, IMO.
And it's back up again.
> Our teams are working on the restoration path for returning our impacted git repositories to a public state.
Can't they just make them public? Am I missing something?
If they posted it on an error or outage page, then they probably didn't mean to set it that way, which implies there was a non-obvious mistake. They might be doing something silly with their permissions.
And that is presuming that this is some sort of technical issue.
...or they mistakenly dropped them. :)
"As part of an internal change task" is the justification listed. Maybe this is a genuine accident.
Someone paranoid might think that the for-profit management at Elastic is trying to pull some of their previously free software behind a paid-for product. Perhaps they accidentally marked all repos private when they only intended to make a few of them private. They have had beef with AWS in the past where they changed their licensing due to things AWS was doing. So I'll fully believe that it was a genuine accident if all the formerly public repos become public again.
Unlikely; over the summer they announced that they were going to be more open source: <https://www.elastic.co/blog/elasticsearch-is-open-source-aga...>
It's a configuration error (sorry!). Also with thousands of forks this would be a pretty pointless operation. Once something is out (and that includes a license), you cannot just take it back — it will be there forever.
[I work for Elastic]
I'm guessing someone accidentally pushed something they shouldn't have
Not that easy, as there are some consequences when you move a repo from public to private.
I seem to remember someone posting about this once -- you lose all your stars / followers when going public -> private, and they're not restored when you go back.
You can see this now on the link in the post. The repo is currently sitting at
Watch: 194, Forks: 0, Stars: 183
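To spell out why "just make them public" is both simple and insufficient: the visibility flip itself is a single call to GitHub's update-repository REST endpoint, but nothing in that call brings back the stars, watchers, or fork network that were dropped on the way to private. A minimal sketch, assuming an org-admin token with rights to administer the repo (the token value here is a placeholder):

    import requests

    # Assumption: GITHUB_TOKEN is an org-admin token allowed to administer the repo.
    GITHUB_TOKEN = "ghp_..."  # placeholder
    OWNER, REPO = "elastic", "elasticsearch"

    # One call to GitHub's "update a repository" endpoint flips visibility back.
    resp = requests.patch(
        f"https://api.github.com/repos/{OWNER}/{REPO}",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"visibility": "public"},  # or {"private": False}
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["visibility"])

    # Note: this only changes visibility. The stars, watchers, and fork network
    # detached on the public -> private transition are not restored by any API
    # call; that part needs GitHub support.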
I would bet, as a result of this and other things like fork management, that they'll be working with GitHub support to try to reverse the go-private and all its consequences.
If it's this: https://news.ycombinator.com/item?id=41060102 then they will need to delete (or rename) and remake the repos and push again. Any security problem would also require doing some due diligence to make sure you really squashed it.
It could be that they discovered some credential or secrets leak in the repo and are fixing it right now.
I don't think so; if you accidentally leak an API key, you invalidate that specific key.
Yeah. This was a configuration error. Keys you just rotate. Making repos private accidentally creates a whole new mess with forks, stars, and so on. Not recommended.
[I work for Elastic]
It's perhaps an issue on GitHub's end
Pretty sure they would call that out in the status update if it was out of their control.
Why? The issue affects their users regardless of whose fault it is.
This is a brutal mistake.
"Stars and watchers for this repository will be permanently erased, which will affect repository rankings."
https://docs.github.com/en/repositories/managing-your-reposi...
Is it really a big deal for a megaproject? It's not like someone wouldn't know what Elasticsearch is or where to look for the source code. Yes, there are billions of people on the planet, and some may casually browse repository rankings, stumble upon Elasticsearch, and get to know it that way. But if someone wants a search solution, I don't think GitHub stars matter.
At the very least it would be embarrassing that OpenSearch has 9.7k stars and ElasticSearch has 187 stars.
It also seems to factor into GitHub search: OpenSearch is on the first page and Elasticsearch is nowhere to be found.
https://github.com/search?q=search&type=repositories
When I see that a project has been forked, I use these stats to help select which fork to go with.
Yeah, stars/watches don't seem like a huge deal to me - but the fork part is pretty bad. It breaks the upstream connection between repos for all the forks.
GitHub support can restore all of that (and is currently doing so). The fork network should already be fixed for Elasticsearch again.
[I work for Elastic]
Good to know. There was a similarly popular project that didn't get this benefit (https://news.ycombinator.com/item?id=31033758).
Eight years ago someone accidentally deleted the Elasticsearch repository (thinking it was their private fork). Back then everything was restored, so I hope we get there again this time too.
I hope so too.
I'm the maintainer of a reasonably popular project (~9K stars) and it's certainly a nightmare scenario given that I consider the stars to give it credibility in a crowded space.
You can read about one experience from a few years ago here: https://news.ycombinator.com/item?id=31033758
I guess ES has enough brand recognition. But it'd be a killer for a lesser-known project. While stars don't matter per se, they do give a project some credibility over one with 0 stars or 50 stars.
If someone looked at both Elasticsearch and Opensearch and was new to the area, they'd think Opensearch was the original and ES the fork.
OpenSearch
All the way.
It's back now, but I'll add that we have private clones of everything we use and automation that pulls them daily. It has been very handy over the years.
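For anyone wanting the same safety net, here is a minimal sketch of that kind of daily pull automation, assuming a local mirror directory and a hand-maintained list of upstream URLs (both placeholders), run from cron or any scheduler:

    import subprocess
    from pathlib import Path

    # Assumptions: MIRROR_DIR and REPOS are placeholders for your own layout.
    MIRROR_DIR = Path("/srv/git-mirrors")
    REPOS = [
        "https://github.com/elastic/elasticsearch.git",
        # ... every upstream you depend on
    ]

    MIRROR_DIR.mkdir(parents=True, exist_ok=True)
    for url in REPOS:
        target = MIRROR_DIR / url.rstrip("/").split("/")[-1]  # e.g. elasticsearch.git
        if target.exists():
            # Refresh an existing bare mirror, pruning refs deleted upstream.
            subprocess.run(
                ["git", "-C", str(target), "remote", "update", "--prune"], check=True
            )
        else:
            # First run: create a bare mirror clone (all branches and tags).
            subprocess.run(["git", "clone", "--mirror", url, str(target)], check=True)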