Are you referring to the development cost or just the "keep SSO wired to other orgs in for our cloud product"? Development-wise, SSO standards don't change much and aren't terribly difficult to get up and running if you stick to OAuth and SAML.
Not offering that in a self-hosted open source version, where the maintenance is delegated to the user, turns this into a naked cash grab.
I think you don't understand the core argument regarding the SSO Tax: security is a positive-sum good, which is why it should not be the feature used for price discrimination.
Not all products have other good features to use for price discrimination, so I have some sympathy for vendors here, but I think it often indicates laziness in thinking about what they can use to do the necessary price discrimination.
I do understand it's super important for security, and I want large companies who have ample money and spend a lot on security to pay me as well for it. If you are running OpenObserve in your basement or are a small startup you get it for free in OpenObserve and stay secure.
I want large companies to pay you too, but SSO is not a purely large company feature. There are plenty of companies with more than 10 developers that are not large companies.
We're in that position. We make B2B software and just increased to 12 devs. Due to security demands from customers, SSO is a must for products like this and it frequently forces us into the enterprise bin.
OpenObserve offers free SSO on our cloud service to anyone, and free SSO for anyone using the enterprise version if they ingest under 200 GB/day (6 TB/month).
This should cover all companies with 10 developers.
So a small company with, say, 25 devs/employees and, say, 2 GB of data per day?
Edit: People here do get what you're saying - "This is the only way we can force some users to pay". What you're not hearing is "Either don't call out your software as FOSS, or if you do, figure out ways of price discriminating without hurting security for FOSS users."
No SSO in open-source, pass. I'll stick w/ Grafana
You should read - https://openobserve.ai/blog/sso-tax and https://openobserve.ai/blog/openobserve-vs-grafana
> enterprise SSO solutions like Okta are not free for users and cost a lot of money for organizations to implement and use.
There are free and open source solutions like Keycloak and Zitadel. I don't dispute they are less common than Okta and Entra, but they definitely exist and are deployed in the real world. My workplace (state government) uses Keycloak, for example.
Another thing the article doesn't really touch on is that SSO locks a security best practice, important for an organization of any size, behind a paywall. With SSO, when someone leaves the organization you can disable their single account and be confident they are locked out of your shared folders, GitLab, Jira, and so on, rather than having to manually track down and disable each account, with a high likelihood of missing one. This matters for any organization larger than one person, from a bootstrapped startup all the way to a Fortune 500. Hiding it behind a higher price makes it more likely that an org will try to do without and have a security breach as a result.
I also take issue with:
> Developing and maintaining SSO solutions requires significant investment in research, development, and infrastructure.
Having done it myself, this is overstated. No feature is free, but implementing a SAML or OAuth flow is not THAT much work, nor does it represent a huge amount of ongoing maintenance.
I actually don't mind the SSO tax too much in cases where it's the differentiator between a free or open source tier and a paid one. I find it far more egregious when it's a product that already has a cost and SAML auth jacks up the price 2-10x. I don't think the blog post is a particularly good discussion of the tradeoffs, though.
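On the "not THAT much work" point: the relying-party side of an OIDC authorization-code flow mostly reduces to building one redirect URL and one token-exchange request. A minimal sketch in Python (the issuer URL, client credentials, and redirect URI are hypothetical placeholders, and a real implementation must also validate the returned ID token):

```python
import secrets
import urllib.parse

def build_authorize_url(issuer, client_id, redirect_uri, state):
    """Step 1: send the browser to the IdP with our client identity and a CSRF token."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid email profile",
        "state": state,
    }
    return f"{issuer}/authorize?" + urllib.parse.urlencode(params)

def token_request_body(client_id, client_secret, redirect_uri, code):
    """Step 2: the POST body for exchanging the one-time code for tokens."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "client_id": client_id,
        "client_secret": client_secret,
    }

state = secrets.token_urlsafe(16)
url = build_authorize_url("https://idp.example.com", "my-app",
                          "https://app.example.com/callback", state)
```

The remaining effort is mostly per-customer IdP configuration plumbing, which is real work but bounded.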
I've read articles like that time and time again. Doesn't change my requirements.
I'm curious: what are your requirements? Their SSO tax appears to be structured in such a way that only large enterprises would have to pay.
How so? This chart [1] has SAML under enterprise with no price tag other than "get in touch"
[1] https://openobserve.ai/pricing
I'm going off the article linked above: https://openobserve.ai/blog/sso-tax
Fair enough. I couldn't find information on self-hosting enterprise on their site, but did in the GitHub repository [1], and 200GB is indeed a lot. At the same time it's also a non-starter for me. I'm not going to install "enterprise" anything where I'm going to start depending on it, and one day the price will go up to ???.
[1] https://github.com/openobserve/openobserve?tab=readme-ov-fil...
God bless you my friend. Thanks for the comment.
If your platform's enterprise offering includes SSO as a value driver to upgrade, you've defined your product's value proposition wrong.
Ahh, you want to make it easier to enable this in your org, in order to get better adoption and ensure the data in our app is more secure? Yeah, you're going to need to pay us for that.
| The way we think of it is, are you a large enterprise that is already spending a lot of money on security and SSO solutions like Okta? If yes, you should be able to pay us as well for the same level of security.
| Vendors need to recoup costs
| Industry standards: The SSO tax has become an industry standard
Well in that case fine /s
1 - Is OpenObserve providing SSO security for ALL your applications? No. Is it doing SCIM, identity governance, provisioning? No... It's like saying you pay for a sandwich, so why don't you pay for the door you used to come into the shop as well. Door tax.
I bet they don't charge you to recoup costs on implementing a JS library. So why are they 'recouping costs' on adding support for the OIDC/SAML standards? Build your solution to support SAML/SCIM and OAuth, and allow anyone to consume it.
Why?
Adoption and security. Anyone who's a Google Workspace or Microsoft shop has an IdP (albeit basic, but OK). Most orgs see the IdP capability there as free. They then see the ability to leverage it as a paid offering in the SaaS apps they buy. So on the one hand the identity provider is free, but the SSO endpoint on the app is paid? Wild.
Also, this is wild:
| For our cloud service we provide SSO in our free tier for the following providers, with plans to support more in future: Google, GitHub, GitLab, Microsoft
This is great, well done.
| SAML and OIDC are available in our enterprise tier.
WTF? The built-out integrations that you had to make UI elements for, you offer free (the vendor-recoup argument died here). The ones that are generic are paid for. Ahhh, that's right, the generic ones are the ones that let you use Okta, Ping, OneLogin, Keycloak, etc. Got it, the "valuable" ones.
Funny, I was actually shopping around for a logging platform yesterday and ended up going with Grafana. The thing that sold it (for now) is their generous free tier, though if I had to self-host (which I intend to do eventually), this seems like a much easier thing to self-host (and I really, really like that it's just one binary).
does anyone use this?
I'm really starting to get sick of companies that claim they operate at petabyte at scale and find you need to spend 400k a month to support that scale.
Thousands of active deployments globally.
How many open source log systems work at PB scale given any number of resources? Also FWIW, OpenObserve can ingest data at 28 MB/Sec/Core (We are working on optimizing it even more) and ingesting 1 PB of data would cost just $435 based on on-demand prices (AWS m7g family).
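The arithmetic behind that cost figure can be checked directly. A sketch, where the per-vCPU hourly price is my assumption in the ballpark of m7g-family on-demand rates, not a quoted number:

```python
# Rough ingest-cost estimate: 1 PB at a fixed per-core throughput.
PB_IN_MB = 1_000_000_000              # 1 PB expressed in MB (decimal units)
THROUGHPUT_MB_PER_SEC_PER_CORE = 28   # claimed ingest rate
PRICE_PER_VCPU_HOUR = 0.0408          # assumed m7g-class on-demand $/vCPU-hour

core_seconds = PB_IN_MB / THROUGHPUT_MB_PER_SEC_PER_CORE
core_hours = core_seconds / 3600      # roughly 10,000 core-hours per PB
cost = core_hours * PRICE_PER_VCPU_HOUR
```

At roughly 10,000 core-hours per PB, the exact dollar figure moves linearly with whatever hourly rate you assume.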
That doesn't answer the question of who? A (rightly) cynical reading of what you posted could just be "thousands of active deployments" you did for yourself to prove benchmarks.
Machines I would use for benchmarking would go down after some time and wouldn't be active.
Still didn't answer the "who" part.
We will publish many names on our website soon.
Why is it only 28 MB/core-second?
Is that production rate, inbound bandwidth, rate to persistence, rate to processed, or rate to display?
Compute power is required to process and store the incoming data.
It's not "only 28 MB/Sec/Core". Try doing the same with Splunk/Elasticsearch - you won't go past 5 MB/Sec/Core (typically it will be lower) on their best day.
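To make the per-core difference concrete, here is the core count each rate implies for a 10 TB/day ingest load (the 5 MB/Sec/Core figure is the claim above about Splunk/Elasticsearch, taken at face value):

```python
# Cores needed to sustain a given daily ingest volume at a per-core rate.
def cores_needed(tb_per_day, mb_per_sec_per_core):
    mb_per_day = tb_per_day * 1_000_000   # decimal TB -> MB
    mb_per_sec = mb_per_day / 86_400      # seconds in a day
    return mb_per_sec / mb_per_sec_per_core

openobserve = cores_needed(10, 28)   # just over 4 cores
splunk_like = cores_needed(10, 5)    # around 23 cores
```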
To what state?
Suppose I have 28 GB of trace data in memory on a machine and then I fire that off. What do I have after 1000 seconds?
Do I just have a file of 28 GB of raw trace?
Do I have 28 GB of raw trace in memory ready to be indexed?
Do I have a data structure in memory ready to be searched?
Do I have the full trace information rendered on my screen (or an aggregated visualization derived after processing all the data)?
If it is the first, that would be ridiculously slow. If it is one of the latter ones, then it would depend on what querying operations are fast.
28 MB/core-second makes no sense without the context of what you can do quickly after the “processing” is done.
Too much to give all the details in an HN thread. To simplify the conversation: data will be persisted and usable for individual searches and aggregations. I would welcome you to our Slack workspace for any further questions you may have - https://short.openobserve.ai/community
Thanks @thunderbong for the post.
We have spent over 2 years building OpenObserve into a simple, highly usable and efficient observability tool. You can run it as a single binary that provides all the functionality of logs, metrics, traces, front-end monitoring, dashboards (18 different chart types), alerts and pipelines.
OpenObserve is being used by startups, mid-tier enterprises and Fortune 100 companies. There are thousands of active installations of OpenObserve globally.
Folks have replaced Elasticsearch, Splunk, Graylog, Datadog, New Relic and more with OpenObserve.
Comment from a user -
We moved from a 5-node OpenSearch cluster to a single-node OpenObserve and measured using our actual everyday queries, which are reasonably complex (1 to 5 conditions applied) over our real logging data. We see that typically they complete in about the same time. OpenObserve costs us 10 times less though (instances + storage).
Also, we are currently working on replacing one of the world's largest Splunk installations.
p.s. I am one of the maintainers of OpenObserve. Feel free to ask questions. I will be happy to answer them. You can also visit our slack workspace at https://short.openobserve.ai/community for discussions.
How is this better than Grafana?
Loki -> logs
Tempo -> traces
Prometheus/Mimir -> metrics
I know personally of several companies which use this stack with tens of thousands of nodes. Everything runs on object storage which means it is easy to scale up, and you can move between storage providers as long as they implement the S3 API. The grafana ecosystem is very sticky as well. If I download some random helm chart it most likely will come with a grafana dashboard which I can easily import and instantly have dashboards and alerts for the helm chart.
I see you address this in a blog post: “You will also hear of LGTM stack (Loki, Grafana, Tempo, Mimir) which is a pretty good stack for observability. Each of these components are separate open source tools built by grafana labs.”
Not very convincing why I wouldn’t go for the LGTM stack which has been proven to be effective.
By all means, if LGTM works for you stay with it.
For those looking for much more simplicity and much higher performance, OpenObserve is the way to go. Many folks have moved from Loki to OpenObserve due to performance issues with Loki. Many have moved from the LGTM stack completely to OpenObserve. Many have chosen to use Grafana as a front end for OpenObserve too.
Take a look at how easy it can be to build dashboards in OpenObserve.
It takes time for community and ecosystem to build for great products. Grafana started in 2014. OpenObserve started in 2022.
What would convince me is if you show benchmarks between large LGTM deployments and OpenObserve.
Show me OpenObserve ingesting a few billion metrics streams and show me how query times are faster than mimir.
In all likelihood you are not going to be convinced by that. You did not switch to the LGTM stack because Grafana gave you a benchmark of LGTM against what you were using previously.
We run benchmarks internally and will publish them once we are ready for it, but are unlikely to benchmark someone else's product. Benchmarking someone else's product always leads to a conversation similar to - You ran your benchmark in the most optimized way but used our non optimized settings or a version of that.
People who use OpenObserve like it for its ease of setup, ease of management, high performance and rich feature set.
Especially for logs, Grafana and Loki are no match for OpenObserve in terms of features and performance. I will let you test it if you are curious and have some spare time.
OpenObserve is used by people ingesting MBs of data in their basement to PBs of data in large clusters in AWS, Azure, GCP and other cloud environments.
BTW, here is a story of a large EV company who moved to OpenObserve for traces and increased performance by a factor of 10x and reduced their cost at the same time - https://openobserve.ai/blog/jidu-journey-to-100-tracing-fide...
Grafana has always been the lingua franca of telemetry, so I think if you want real traction with people like me you are going to need to publish a publicly verifiable benchmark that OpenObserve can ingest a billion metrics streams, which Mimir could do several years ago: https://grafana.com/blog/2022/04/08/how-we-scaled-our-new-pr...
Like I said, the industry is small and I know several colleagues running LGTM at large scales (10k-100k nodes) so I know the system works and it is a safe bet.
Had a pretty bad experience with this. The web app is very buggy and frustrating.
What bugs? Care to file a GitHub issue?
How is it compared to Signoz?
AGPLv3 versus MIT Expat + open core for one thing
https://github.com/openobserve/openobserve/blob/v0.12.1/LICE...
https://github.com/SigNoz/signoz/blob/v0.56.0/LICENSE
For one thing - from their website: "Our powerful ingestion engine has a proven track record of handling 10TB+ data ingestion per day."
OpenObserve clusters can ingest PBs of data every day. While more can be discussed, I would rather focus on what your needs are when it comes to observability. Let's talk about them. I will be more excited to answer those questions.
How does it compare to Grafana suite?
You should read this - https://openobserve.ai/blog/openobserve-vs-grafana
That's an interesting take for OpenObserve, but for me in an ops role it misses the compatibility. I appreciate that with Grafana I can choose a specific backend and configure it as required in my environment. I know other systems can reach the same database because they have known APIs. I also know I can move to another store with the same front-end if the owners pull an Elastic/Redis.
OO integration may save me a couple of days of setup, but long-term it's a dangerously limiting idea / lock-in.
You are in the same danger with Grafana (front end, Elastic/Redis fate and lock-in) as you are with OpenObserve. No difference there.
The risk is more spread out between the projects though so the replacements required would be smaller.
Used this self-hosted, ingested 150GiB daily, and had absolutely no issues. A fancy UI and more buttons are not needed if you get the value from ingestion speed.
How are you folks related to Anguilla? Couldn’t really find anything AI specific about OpenObserve so I am guessing the domain is for other reasons?
How does the storage cost for 3 nodes stay exactly the same as for 1 node for openobserve?
By using object storage (Think s3 and similar) and not replicating data for HA (Not needed if using s3) which is done by legacy systems like Elasticsearch and Splunk.
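A back-of-envelope sketch of why that holds: a replica-based cluster pays for every byte three times on node-attached disks, while an object-store-backed design pays for one copy and inherits durability from the store. The per-GB-month prices below are illustrative assumptions, not quotes:

```python
# Monthly storage cost for 100 TB of retained data: one copy on S3-style
# object storage vs. three replicas on node-attached block storage.
DATA_TB = 100
OBJECT_STORE_PER_GB_MONTH = 0.023   # assumed S3-standard-like rate
BLOCK_STORE_PER_GB_MONTH = 0.08     # assumed gp3-like rate
REPLICAS = 3                        # typical ES/Splunk-style replication

gb = DATA_TB * 1000
object_cost = gb * OBJECT_STORE_PER_GB_MONTH                 # one durable copy
replicated_cost = gb * BLOCK_STORE_PER_GB_MONTH * REPLICAS   # three copies
```

Adding query or ingest nodes then changes compute spend, but the bytes at rest, and their cost, stay the same.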
Just forwarding the host's logs from journald to an OpenObserve instance running in a VM on that host requires an agent that will give you more reasons to have to observe a log than anything else.
And syslog/syslog-ng are also problematic to ingest.
OpenObserve is built for centralized logging - not really for installing on every Linux host. If that is your use case, I would recommend you look at other tools.
Another one for the sso wall of shame - https://sso.tax/
How should companies monetize products? Maybe people should just go back to full commercial models, not sure.
Most people who talk about the SSO Tax don't really care for its value but rather want free stuff. I have had conversations with multi-billion dollar companies who would avoid paying a single dollar to support open source companies and bring the SSO tax into the conversation.
On our part, OpenObserve offers free SSO on our cloud service to anyone, and free SSO for anyone using the enterprise version if they ingest under 200 GB/day (6 TB/month).
The problem with not offering SSO on lower tiers is that it can make it hard to test into a new service. I might want to try out a new tool with one team for a couple of months and see how it goes before recommending adoption for a broader group. I don't want to have to sign a year-long six-figure contract just to try something out. Sometimes you can work out a trial period with the sales team, but that's not always easy, and it puts a strict ticking clock on things that doesn't work in all situations.
> Most people who talk about the SSO Tax don't really care for its value but rather want free stuff.
Most people who put out open source software don't really want to accept useful PRs (which implement OpenID) but rather just want free distribution.
> ...conversations with multi-billion dollar companies...
Slight clarification here. Might not apply in OpenObservability's case but might help others on their journey to enterprise sales with their projects.
Those are typically conversations with managers holding $X purchasing authority (typically around $500K at a US director-ish level) within multi-billion-dollar companies. These managers usually aren't averse to spending on open source projects. They're averse to cutting a check that isn't tied to a support contract with published support policies and responsive, polite, helpful staff on a break-fix line at 0300h local time, with 75 other people from other support teams in the company watching. A surprising number of open source projects won't offer that guarantee, and instead only offer the option to "donate" with vague promises of priority support. More projects have been getting better at this recently, but it takes a surprising amount of red tape to onboard as a vendor into these organizations, and a lot of open source teams don't have the appetite for putting up with that.
Until fairly recently, when the conversation switched to the SSO tax, it was really about accessing that level of guaranteed support delivery.
Thanks @yourapostasy . Agree with you for the most part.
Not all managers are averse to paying, but many are. I have had discussions with Director/Sr. Director and VP level folks in these companies. I have been paid and I have been denied.
Our biggest customer is a Fortune 10 company, and we are able to offer the kind of support that they need. It does take a lot to provide that kind of support, though, and it would be difficult for most small open source projects to do.
> How should companies monetize products?
By charging for valuable, differentiated features.
Not by charging for undifferentiated, standardized, secure authentication.
That's the company's problem.
They're entitled to their business model. They're not entitled to it working. They're not entitled to someone figuring out a business model for them if people don't like it.
SSO often costs quite a lot to maintain, given how widely varied the systems are. Seems reasonable to charge for an optional high-complexity and high-maintenance-burden feature.
Are you referring to the development cost, or just the "keep SSO wired to other orgs" upkeep for your cloud product? Development-wise, SSO standards don't change much and aren't terribly difficult to get up and running if you stick to OAuth and SAML.
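To make that concrete, the server side of an OAuth 2.0 authorization-code login is mostly two steps: build a redirect URL, then exchange the returned code for tokens. A minimal sketch, where the IdP endpoints and client registration (`idp.example.com`, `my-app`, the redirect URI) are hypothetical placeholders, not a real provider:

```python
import urllib.parse

# Hypothetical values; real ones come from the customer's IdP metadata
# and your client registration with that IdP.
AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "my-app"
REDIRECT_URI = "https://app.example.com/callback"

def build_authorization_url(state: str) -> str:
    """Step 1: redirect the user's browser to the IdP with these params."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",
        "state": state,  # CSRF protection: verify it again on the callback
    }
    return AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

def token_request_body(code: str, client_secret: str) -> dict:
    """Step 2: after the callback, POST this form body server-side
    to TOKEN_URL to exchange the one-time code for tokens."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": client_secret,
    }
```

SAML adds XML signature verification on top of this, which is where most of the real effort goes, but mature libraries handle that part too.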
Not offering that in a self-hosted open source version, where the maintenance is delegated to the user, turns this into a naked cash grab.
How is this on the SSO tax wall of shame? They support SSO on their free tier.
You should read this - https://openobserve.ai/blog/sso-tax
I think you don't understand the core argument re the SSO tax, which is that security is a positive-sum good, which is why it should not be the feature used for price discrimination.
Not all products have other good features to use for price discrimination, so I have some sympathy for vendors here, but I think it often indicates laziness in thinking about what they can use to do the necessary price discrimination.
I do understand it's super important for security, and I want large companies that have ample money and spend a lot on security to pay me for it as well. If you are running OpenObserve in your basement or at a small startup, you get it for free in OpenObserve and stay secure.
I want large companies to pay you too, but SSO is not a purely large company feature. There are plenty of companies with more than 10 developers that are not large companies.
We're in that position. We make B2B software and just increased to 12 devs. Due to security demands from customers, SSO is a must for products like this and it frequently forces us into the enterprise bin.
OpenObserve offers free SSO on our cloud service to anyone, and free SSO to anyone using the enterprise version if they ingest under 200 GB/day (6 TB/month).
This should cover all companies with 10 developers.
So what about a small company with, say, 25 devs/employees ingesting around 2 GB of data per day?
Edit: People here do get what you're saying - "This is the only way we can force some users to pay". What you're not hearing is "Either don't call out your software as FOSS, or if you do, figure out ways of price discriminating without hurting security for FOSS users."