Some notes:
- The docs.rs docs are still building, but the docs from the recent RC are available [0]
- The Slint project has an example of embedding Servo into Slint [1], which is a good example of how to use the embedding API and should be relatively easy to adapt to any other GUI framework that renders using wgpu (a minimal dependency sketch follows the links below).
- Stylo [2] and WebRender [3] have both also been published to crates.io, and can be useful standalone (Stylo has actually been getting monthly releases for about a year, but we never really publicised that).
- Ongoing releases on a monthly cadence are planned
[0]: https://docs.rs/servo/0.1.0-rc2/servo
[1]: https://github.com/slint-ui/slint/tree/master/examples/servo
[2]: https://docs.rs/stylo
[3]: https://docs.rs/webrender
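For anyone wiring this up, a minimal dependency sketch. The crate name and version here are taken from the docs.rs link above ([0]); pre-release versions have to be requested exactly, so check crates.io for the current release number before copying this:

```toml
[dependencies]
# Pre-release versions are only matched when named exactly.
servo = "0.1.0-rc2"
```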
Tangent, but Slint is a really cool project. Not being able to dynamically insert widgets from code was the only thing that turned me off of it for my use case.
Here's a vibe-coded "servo-shot" CLI tool which uses this crate to render an image of a web page: https://github.com/simonw/research/tree/main/servo-crate-exp...
Here's the image it generated: https://gist.github.com/simonw/c2cb4fcb15b0837bbc4540c3d398c...

That's pretty cool. I'm guessing it would need some tweaking to handle things like cookies, or does it just need a pointer to the cookie jar? I'm not too familiar with Servo.
This is super useful! I have immediate use for this.
Do you know if Servo is 100% Rust with no external system dependencies? (ie, can get away with rustls only?)
Can this do Javascript? (Edit: Rendering SPAs / Javascript-only UX would be useful.)
Edit 2: Can it do WebGL? Same rationale for ThreeJS-style apps and 3D renders. (This in particular is right up my use case's alley.)
It depends on stuff like SpiderMonkey, so it's not pure Rust.
It should be able to render JavaScript, but I've seen it throw bugs on simple pages, no doubt because my vibe-coded thing is crap, not because Servo itself can't handle them.
I have been building/vibecoding a similar tool and unfortunately came to the conclusion that, in practice, too many features depend on the full Chrome stack, so it's more pragmatic to use a real Chromium installation despite the file size. Performance/image generation speed is still fine, though.
In Rust, the chromiumoxide crate is a performant way to interface with it for screenshots: https://crates.io/crates/chromiumoxide
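For reference, a minimal screenshot sketch with chromiumoxide, adapted from memory of the crate's README; treat the exact method and type names as assumptions to verify against the current release, and note that it drives a locally installed Chromium binary (the URL is just a placeholder):

```rust
use chromiumoxide::browser::{Browser, BrowserConfig};
use chromiumoxide::page::ScreenshotParams;
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Launch a local Chromium; the handler task pumps CDP events
    // and must be polled for the browser connection to make progress.
    let (mut browser, mut handler) =
        Browser::launch(BrowserConfig::builder().build()?).await?;
    let events = tokio::task::spawn(async move {
        while let Some(event) = handler.next().await {
            if event.is_err() {
                break;
            }
        }
    });

    // Navigate and capture a full-page PNG to disk.
    let page = browser.new_page("https://example.com").await?;
    page.save_screenshot(
        ScreenshotParams::builder().full_page(true).build(),
        "example.png",
    )
    .await?;

    browser.close().await?;
    let _ = events.await;
    Ok(())
}
```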
This should be the real benchmark of AI coding skills - how fast do we get the safe/modern infrastructure/tooling that everyone agrees we need but nobody can fund?
If Anthropic wants marketing for Mythos without publishing it - show us a Servo contrib log or something like that. It aligns nicely with their fundamental infrastructure safety goals.
I'd trust that way more than x% increase on y bench.
Hire a core contributor on Servo or Rust, give them unlimited model access, and let's see how far we get with each release.
We do not need vibe-coded critical infrastructure.
As I see it, the focus should not be about the coding, but about the testing, and particularly the security evaluation. Particularly for critical infrastructure, I would want us to have a testing approach that is so reliable that it wouldn't matter who/what wrote the code.
I don't think that will ever be possible.
At some point, security becomes: the program does the thing the human asked for, which the human didn't realize they didn't actually want.
No amount of testing can fix logic bugs due to bad specification.
AI as advanced fuzz-testing is ridiculously helpful though - hardly any bug you can find in this sort of advanced system is a specification logic bug. It's low-level security-based stuff: finding ways to DoS a local process, working around OS-level security restrictions, etc.
Well, yes, agreed - that is the essential domain complexity.
But my argument is that we can work to minimize the time we spend on verifying the code-level accidental complexity.
Sure, but that is what we've been doing since the early 2000s (e.g. ASLR, read-only stacks, static analysis, etc.).
And we've had some successes, but I wouldn't expect any game-changing breakthroughs any time soon.
I disagree. Thorough testing provides some level of confidence that the code is correct, but there's immense value in having infrastructure which some people understand because they wrote it. No amount of process around your vibe slop can provide that.
That's just status quo, which isn't really holding up in the modern era IMO.
I'm sure we'll have vibed infrastructure and slow infrastructure, and one of them will burn down more frequently. Only time will tell who survives the onslaught and who gets dropped, but I personally won't be making any bets on slow infrastructure.
I somewhat agree, but even then would argue that the proper level at which this understanding should reside is at the architecture and data flow invariants levels, rather than the code itself. And these can actually be enforced quite well as tests against human-authored diagrammatical specs.
If you don't fully understand the code how do you know it implements your architecture exactly and without doing it in a way that has implications you hadn't thought of?
As a trivial example, I just found a piece of irrelevant crap in some code I generated a couple of weeks ago. It worked in the simple cases, which is why I never spotted it, but it would have had some weird effects in more complicated ones. Perhaps it was my prompting that didn't explain things well enough, but how was I to know I had failed without reading the code?
I disagree. The code itself matters too.
They're getting really good at proofs and theorems, right?
If you're trusting core contributors without AI I don't see why you wouldn't trust them with it.
Hiring a few core devs to work on it should be a rounding error to Anthropic and a huge flex if they are actually able to deliver.
I trust people to understand the code they write. I don't trust them to understand code they didn't write.
It's extremely tempting to write stuff and not bother to understand it similar to the way most of us don't decompile our binaries and look at the assembler when we write C/C++.
So, should I trust an LLM as much as a C compiler?
Well if the big players want to tell me their models are nearly AGI they need to put up or shut up. I don't want a stochastically downloaded C compiler. I want tech that improves something.
Unfortunately we're going to get it whether or not we need it.
Replicating Chromium as a benchmark? ;)
Replicating Rust would also be a good one. There are many Rust-adjacent languages that ought to exist and would greatly benefit mankind if they were created.
The true solution to this is to fund things that are important, especially when billion-dollar companies are making a fortune from them.
> show us servo contrib log or something like that
Servo may not be the best project for this experiment, as it has a strict policy disallowing AI contributions.
Agreed. Which other software does society need badly?
So, since this is the top post on Hacker News, and the website's description is a bit too high level for me, what does Servo let me do? By "web technologies", does it mean "put a web browser inside your desktop app"?
It's an alternative browser engine, vis-à-vis Ladybird
Specifically, it's the browser engine that spun out of Mozilla's early efforts towards a rust-based browser, and is one of the motivating projects for the entire Rust ecosystem
Is there a table of implemented RFCs? Something similar to http://caniuse.com where we can see what HTML/JS/CSS standards and features are implemented? If it exists, I can't seem to find it. The closest thing seems to be the "experimental features" page, but it's not quite detailed enough.
Oh, I forgot that https://arewebrowseryet.com/ exists for this too!
https://doc.servo.org/apis.html is auto-generated from the WebIDL interfaces that exist in Servo. It's not great, but better than nothing.
Closest is perhaps the web platform tests
https://servo.org/wpt/
Their blog has monthly posts on changes: https://servo.org/blog/
For those of you using a browser to generate PDFs, the Rust crate you should look into is Typst [1]. Regardless of your application language, you can use their CLI.
It takes some time to get used to their DSL for writing PDFs, but nowadays with AI that shouldn't take too long.
[1] https://crates.io/crates/typst
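Since the advice is to use the CLI regardless of application language, here's a minimal sketch of shelling out to it from Rust. `typst compile <input> <output>` is the documented invocation; the `invoice.typ` source file is a hypothetical example, and `typst` is assumed to be on PATH:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Compile a Typst source file to PDF by invoking the CLI.
    let status = Command::new("typst")
        .args(["compile", "invoice.typ", "invoice.pdf"])
        .status()?;
    if !status.success() {
        eprintln!("typst compile failed with {status}");
    }
    Ok(())
}
```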
I keep hearing about this one as a LaTeX alternative. I shall have to take a proper look.
> As you can see from the version number, this release is not a 1.0 release. In fact, we still haven’t finished discussing what 1.0 means for Servo
Wait, crate versions go up to 1.0?
EDIT: Sorry, while crate stability may be an interesting conversation, this isn't the place for it. But I can't delete this comment. Please downvote it. Mods feel free to delete or demote it.
The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not 0.4.0 or 1.0.0; when the major version is 0, the minor takes the role of major and the patch takes the role of minor. So packages iterate through 0.x versions and eventually reach a version that's "stable".
If version 0.7 turned out to hit the right API and not require backward-incompatible changes, releasing a version 1.0 would be as disruptive to your users as a major version change, and would communicate through version semantics that it is a breaking change even though nothing broke.
Semver declares that version 0.x is for initial development where there is no stability guarantee at all. This is the right semantics for a versioning system, but Cargo doesn't follow this part of semver. Providing stability guarantees throughout the 0.x cycle inevitably results in projects getting stuck in 0.x.
This is one of my biggest gripes with Cargo. But Rust people seem to universally consider it a non-issue so I don't think it'll ever be fixed.
> The fundamental problem with Rust versioning is that 0.3.5 is compatible with 0.3.6, but not 0.4.0 or 1.0.0
That’s a feature of semver, not a bug :)
Long answer: you are right to notice that minor versions within a major release can introduce new APIs and changes, but generally should not break existing APIs until the next major release.
However, this rule only applies to libraries after they reach 1.0.0. Before 1.0.0, one shouldn't really expect any APIs to be frozen.
No, it's explicitly not. Semver says:
> Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
Cargo is explicitly breaking with Semver by considering 0.3.5 compatible with 0.3.6.
To go further, semver provides semantics and an ordering, but it says nothing about version requirement syntax. The caret operator for describing a range of versions is not part of the spec; it was introduced by early semver-aware package managers such as npm or gem. Cargo decided to default to the caret operator, but that's a Cargo choice, not part of semver.
In practice, there's no real issue with using the first non-zero component to define the group of API-compatible releases and most package managers agree on the semantics.
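You can see those semantics concretely with the semver crate, the same requirement-matching implementation Cargo uses, where a bare requirement like "0.3.5" defaults to ^0.3.5:

```rust
// Requires the `semver` crate as a dependency.
use semver::{Version, VersionReq};

fn main() {
    // Cargo treats a bare "0.3.5" requirement as ^0.3.5.
    let req = VersionReq::parse("^0.3.5").unwrap();

    assert!(req.matches(&Version::parse("0.3.6").unwrap())); // compatible
    assert!(!req.matches(&Version::parse("0.4.0").unwrap())); // breaking
    assert!(!req.matches(&Version::parse("1.0.0").unwrap())); // breaking
}
```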
Thank you.
Eventually this will get cleared up. I'm closer than I've ever been to actually handling this, but it's been 9 years already, so what's another few months…
The standard library has a whole bunch of tools to let them test and evolve APIs with required opt-in, but every single ecosystem package has to get it right on the first try, because Cargo will silently and forcibly update packages, and those evolution tools aren't available to third-party packages.
Such a stupid state of affairs.
Personally, I think the 0 major version is a bad idea. I understand the desire to avoid making stability guarantees in the early stages of development, when you don't want people depending on it. But hiding that behind "v0.x" doesn't change the fact that you are releasing versions and people are depending on them.
If you didn't want people to depend on your package (hence the word "dependency"), then why release it? If your public interface changes, bump that major version number. What are you afraid of? People taking your project seriously?
0.x is not that you don't want people depending on it; you just don't want them to come and complain when you quickly introduce some breaking changes. The project is still in development: it might be stable enough for use in "real projects(tm)", but it might also still significantly change. It is up to the user to decide whether they are OK with this.
1.x communicates (to me at least) you are pretty happy with the current state of the package and don't see any considerable breaking changes in the future. When 2.x comes around, this is often after 1.x has been in use for a long time and people have raised some pain points that can only be addressed by breaking the API.
But people will complain, so ex falso quodlibet
If you are at the point that other people can use your software, then you should use v1. If you are not ready for v1, then you shouldn't be releasing to other people.
Because of this comment: "The project is still in development, it might be stable enough for use in "real projects(tm)", but it might also still significantly change." That describes every project. Every project is always in development. Every project is stable until it isn't. And when it isn't, you bump the major number.
I think we can come up with a reason why bumping the major version on each breaking change isn't an elegant solution either: you would end up with version numbers in the hundreds or thousands.
Versioning is communication. I find it useful to communicate, through using version 0.x, "this is not a production ready library and it may change at any time, I provide no stability guarantees". Why might I release it in that state? Because it might still be useful to people, and people who find it useful may become contributors.
Any project may change at any time. That's why they bump from v1 to v2. But by not using the full precision of the version number, you're not able to communicate as clearly about releases. A minor release may not be 100% compatible with the previous version, but people still expect some degree of similarity such that migrating is not a difficult task. But going from v0.n to v0.(n+1) uses that field to communicate "hell, anything could happen, YOLO."
Nobody cares that Chrome's major version is 147.
By releasing a library with version 1.0, I communicate: "I consider this project to be in a state where it is reasonable to depend on it".
By releasing a library with version 0.x, I communicate: "I consider this project to be under initial development and would advise people not to depend on it unless you want to participate in its initial development".
I don't understand why people find this difficult or controversial.
Hey - many Rust libraries adopt 0-based versioning (https://0ver.org/). That link can describe it more elegantly than I can.
If you want to lure Microslop into migrating all their "great" apps to Servo: easy, just add bloat code so it uses 5GB of RAM by default. That's instant adoption by MS.
I was a little curious to see if there was any Tauri integration, and it looks like there is (tauri-runtime-verso) ... Not sure where that comes out size-wise compared to Electron at that point though. My main desire there would be for Linux/flathub distribution of an app I've been working on.
What could this crate be used for?
When Servo is ready, I have plans to swap it into qutebrowser, which I've been growing fonder of.
Is Servo production-ready enough to replace or embed alongside engines like WebKit or Blink?
It depends on your use case. I wouldn't use it for a JS-heavy site. But if you have simple static content, it's probably enough. It's worth testing it out as a standalone app before integrating it as a library.
It doesn't crash as often as it used to a few years ago. JS-heavy sites might not work, and there are layout issues too. And internet gatekeepers like Cloudflare Turnstile don't work.
Why did it crash? Rust is supposed to be memory safe?..
We've come full circle: they invented Rust to build Servo with it.
feels like we're actually getting new browser engines this decade and it's kind of strange
Servo has been on-the-go for a while though. It hasn't been a lightning speed development, it's just getting a bit more visible.
Sounds great, I'd use the crate from now on. It's more convenient that way.
Did Firefox drop Servo? I recall they were in the process of a "rewrite in Rust"?
Firefox incorporated parts of the Servo effort which were able to reach maturity. Stylo (Firefox's current CSS engine) and Webrender (the rendering engine) and a few other small components came from the Servo project.
Most other parts of Servo were not mature enough to integrate at the time Mozilla decided to end support for the project and didn't look like they would be mature enough any time soon. The DOM engine for example was in the early stages of being completely rewritten at the time because the original version had an architecture that made supporting the entire breadth of web standards challenging.
Keep in mind that you can continue adding Rust to Firefox without replacing whole components. It's not like Mozilla abandoned the idea of using more Rust in Firefox just because they stopped trying to rewrite whole components from the ground up.
Yes, during the layoffs of August 2020.
Mozilla laid off the full Servo team but never publicly announced it, AFAIK. Wikipedia covers it here: https://en.wikipedia.org/wiki/Firefox#cite_ref-120
Mozilla can't help but be its own worst enemy. Ladybird may well never have happened if Mozilla had just kept working on Servo, and Ladybird is most definitely going to outcompete Firefox when it reaches maturity, as Mozilla keeps burning bridges with open-source enthusiasts.
The problem with Mozilla is not just technical but cultural. The organization has been infected with managers. The managers want to keep their jobs more than they want Firefox to succeed. Clearly the solution is for the managers to fire themselves and allow the developers to run the show, but that was not going to happen.
Ladybird, by contrast, is a developer-led open source project that has no such constraints. They also don't have a product yet, but I'm sure the picture will be radically different in a few years.
Conway's law in action.
To add to the other replies, Firefox was explicitly never going to consume all of Servo. It was always meant to be a test bed project where sub-projects could be migrated to Firefox. I suspect that the long term intent might have been for Servo to get to a point where it could become Firefox, but that wasn't the stated plan.
I think they implemented parts of it in their Gecko engine. But they laid off the entire Servo development team in 2020, I believe.
Only recently, when it moved over to the Linux Foundation, has Servo started being worked on again.
It's a great move. The early development of Rust aimed to support Servo. However, it's still disappointing that the script engine uses SpiderMonkey, which is pure C++.
It's best not to try and eat the elephant in one bite, which is perhaps where this project went wrong initially. Maybe this is a symptom of learning from past mistakes rather than a flaw.
My understanding is that the original intent of Servo was to be a way to develop features and port them over to Firefox itself (which did happen with at least a few features), and the relatively slower pace of development is more due to Mozilla laying off everyone who was working on it. (Yes, presumably many of the same people are involved, but being able to work on something full time, without needing another source of income, will make progress faster than finding time outside of work while balancing everything else in life, ideally in a way that avoids burnout.)
My understanding was that from day one the desire was to make a complete "web rendering & layout engine" and only pivoted to shipping smaller sub-components like Stylo (stylesheets) when it appeared to be "taking too long." I followed the project from the early days through the layoffs, but I may be misremembering things.
Interesting, it's certainly possible I was never aware of the super early days.
There are, what, 5+ Rust JavaScript engines that claim to be production-ready? Bolting one of those on in place of SpiderMonkey seems like a reasonable future direction.
What do you mean by "production ready" here exactly? In a web browser context, the JS engine is expected to have a high performance optimising JIT compiler. Do the existing Rust JS engines have that?
There's something to be said for the security benefits of not having a JIT though. Especially if you've used Rust for the engine you should have pretty solid security.
Yeah, having a code section that is writable and executable is a huge no-no from a security standpoint. JIT is a fundamentally insecure concept, just in general. By definition it's trading security for speed.
I honestly don't know, but they do say "production ready" on their marketing pages, so...
For an example of what I mean, see JetCrab: https://jetcrab.com
This doesn't implement a JS engine; it's just a wrapper around Boa.
That page says:
> Complete JavaScript execution pipeline from source code parsing to bytecode execution.
So it's a bytecode interpreter, not a JIT.
It might still be production ready for a bunch of use cases. I may use it as a scripting layer for some pluggable piece of software or a game. I wouldn't consider it appropriate for a "production ready web browser" which intends to compete with Firefox and Chrome.
EDIT: Also, for some reason all its components are called v8_something? That's pretty off-putting; you can't just take another project's name like that. And from the author's Reddit comments, it seems to be mostly AI slop anyway. I'm guessing Claude wrote the "production ready" part on the website, so I wouldn't trust it.
They may be production-ready in some sense but they're not ready to be put in Firefox, and/or they are v8 bindings.
They're all more than 10x slower than SpiderMonkey.
I mean SpiderMonkey works, and presumably is fairly self-contained, so I can see why replacing that isn't attractive unless you believe you can make it significantly better in some way.
Too little, too late, now that the new meta is to use system-provided webviews so you don't have to ship a big-ass web renderer per app.
System web views were available as drag and drop components in VB6 two and a half decades ago. There's nothing "new" about that as a concept, and plenty of reasons to not want to use Blink/WebKit.
> System web views were available as drag and drop components in VB6 two and a half decades ago. There's nothing "new" about that as a concept
We are in a thread discussing a Rust library; logically, I was referring to the current approach to GUI rendering in the Rust space (such as Tauri and Dioxus).
> and plenty of reasons to not want to use Blink/WebKit.
Such as? Can you name a few objective reasons against Blink/WebKit (the technology) that don't involve just not liking Google/Apple?
Tauri/Dioxus aren't necessarily the end state of Rust GUI
No particular reason Servo couldn't one day become the system web view on Linux distros...
Linux (GNU/Linux or whatever) doesn’t even have the concept of a system web view. The closest you might get to the notion is probably WebKitGTK which is perhaps the GNOME idea of a system web view, but it’s nothing like WebKit on macOS or WebView2 (or MSHTML in the past) on Windows for popularity or availability.
As a user of a desktop environment other than gnome-shell, I only have webkitgtk-6.0 installed because I chose to install Epiphany—it’s a good proxy for testing on Safari, which Apple makes ridiculously expensive.
Yeah, the closest thing you come to today is arguably WebKitGTK, which is known for being not exactly great.
That is not the meta. The meta is to ship Blink so you only have to support a single version of a single web engine instead of many versions of many different web engines.