Can someone explain to me what the difference really is between WASM and older tech like Java Applets, ActiveX, Silverlight and Macromedia Flash, because they don’t really sound much different to me. Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser. In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.
vbezhenar 42 days ago [-]
Java and Flash failed to deliver their promise of an unbreakable sandbox where one could run anything without risking compromising the host. They tried, but their implementations were riddled with vulnerabilities and eventually browsers made them unusable. The other mentioned technologies didn't even promise that, I think.
JavaScript did deliver its promise of an unbreakable sandbox, and nowadays the browser runs JavaScript downloaded from any domain without asking the user whether they trust it or not.
WASM builds on the JavaScript engine, delivering similar security guarantees.
So there's no fundamental difference between WASM and JVM bytecode. There's only practical difference: WASM proved to be secure and JVM did not.
So now Google Chrome is secure enough for billions of people to safely run evil WASM without compromising their phones, and you can copy this engine from Google Chrome to a server and use this strong sandbox to run scripts from various users, who could share resources.
An alternative is to use virtualization. So you can either compile your code to a WASM blob and run it in the big WASM server, or you can compile your code to an amd64 binary, put it alongside a stripped Linux kernel and run that in a VM. There's no clear winner here for now, I think; there are pros and cons to each approach.
jasode 42 days ago [-]
>So there's no fundamental difference between WASM and JVM bytecode. There's only practical difference: WASM proved to be secure and JVM did not.
There's more to it than just the sandbox security model. JVM bytecode doesn't have pointers, which has significant performance ramifications for any language with native pointers. This limitation was one of the reasons why the JVM was never a serious compilation target for low-level languages like C/C++.
E.g. Adobe compiled their Photoshop C++ code to WASM but never to the JVM to run in a Java JRE or as a Java web applet. Sure, one can twist a Java byte array to act as a flat address space and then "emulate" pointers for C/C++, but this extra layer of indirection, which reduces performance, wasn't something software companies with C/C++ codebases were interested in. Even though the JVM was advertised as "WORA Write-Once-Run-Anywhere", commercial software companies never deployed their C/C++ apps to the JVM.
So the WASM-vs-JVM story can't be simplified to "just security" or "just politics". There were actual different technical choices made in the WASM bytecode architecture to enable lower-level languages like C/C++. That's not to say the Sun Java team's technical choices for the JVM bytecode were "wrong"; they just used different assumptions for a different world.
adamc 42 days ago [-]
Also, the start-up time for the JVM made running applets very sluggish. Java quickly became a synonym for "slow".
kaba0 42 days ago [-]
You can’t just compare across decades of software and hardware development. Even downloading native binaries would have been sluggish, given the download speeds of the time.
fastball 42 days ago [-]
Isn't the cold-start for the JVM still relatively slow, even in [current year]?
EDIT: seems like yes[1], at least where AWS Lambda is concerned.
I have a couple Quarkus apps that I've run in Lambdas that start in about a second. This is without using GraalVM too! Good enough for what I was doing (taking a list of file names, finding them in an S3 bucket and zipping them into a single payload)
42 days ago [-]
adamc 42 days ago [-]
But web pages were not so sluggish, hence people chose them over using applets.
kaba0 42 days ago [-]
Web pages at the time could at most <blink>; their interactivity was extremely limited compared to what we have now. Meanwhile a Java applet could include a full-blown IDE/CAD/what have you.
adamc 42 days ago [-]
Well, web pages could submit forms, which was the main thing. I remember working on apps where we went with web pages because applets were too slow, regardless of the features we gave up. Images were generated on the back end instead, for example.
tromp 42 days ago [-]
Lack of 64-bit ints didn't help either...
DanielHB 42 days ago [-]
> WASM proved to be secure and JVM did not.
It is interesting to ask why that is the case. From my point of view, the reason is that the JVM standard library is just too damn large, while WASM takes the lower-level approach of simply not having one.
To give WASM the capabilities it requires, the host (the agent running the WASM code) needs to provide them. For a lot of languages that means using WASI, moving most of the security concerns to the WASI implementation used.
But if you really want to create a secure environment you can just... not implement all of WASI. So a lambda function host environment can, for example, just not implement any filesystem WASI calls, because a lambda has no business doing filesystem stuff.
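A rough sketch of what that could look like from the host side, in TypeScript (the module name "guest.wasm" and the stub behaviour are made up; a real WASI host provides many more functions):

    import { readFile } from "node:fs/promises";

    // Hedged sketch: the host decides which WASI imports exist at all.
    // Only fd_write is provided (and even that is a no-op stub here);
    // path_open always fails, so the guest simply cannot open files.
    const wasi_snapshot_preview1 = {
      fd_write: (_fd: number, _iovs: number, _len: number, _nwritten: number) => 0,
      path_open: () => 76, // a nonzero WASI errno ("notcapable" is assumed here)
      proc_exit: (code: number) => { throw new Error(`exit ${code}`); },
    };

    const bytes = await readFile("guest.wasm"); // hypothetical guest module
    const { instance } = await WebAssembly.instantiate(bytes, { wasi_snapshot_preview1 });
    (instance.exports._start as () => void)();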
> An alternative is to use virtualization. So you can either compile your code to WASM blob and run it in the big WASM server, or you can compile your code to amd64 binary, put it along stripped Linux kernel and run this thing in the VM.
I think the first approach gives a lot more room for the host to create optimizations, to the point we could see hardware with custom instructions to make WASM faster. Or custom WASM runtimes heavily tied to the hardware they run on to make better JIT code.
I imagine a future where WASM is treated like LLVM IR
cogman10 42 days ago [-]
I'll just add one thing here: WASM's platform access is VERY small. There's almost no runtime associated with WASM, and thus nothing WASM is guaranteed to be able to access.
When you throw WASM into the browser, its access to the outside world is granted by the JavaScript container that invokes it.
That's very different compared to how the old browser plugins operated. Plugins like the JVM or Flash were literally the browser calling into a binary blob with full access to the whole platform.
That is why the WASM model is secure vs the JVM model. WASM simply can't interact with the system unless it is explicitly given access to the system by the host calling it. It is even more strictly sandboxed than the JavaScript engine that is executing it.
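To make "explicitly given access" concrete, here's a minimal sketch of the JS side (the module name and import names are invented):

    // The only capability this module receives is a single log function.
    // It has no other way to reach the DOM, the network, or the filesystem.
    const imports = {
      env: { log: (value: number) => console.log("guest says:", value) },
    };
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("module.wasm"), imports);
    (instance.exports.run as () => void)();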
Vilian 41 days ago [-]
Why can't wasm have its own invoker, instead of relying on javascript?
kaba0 42 days ago [-]
> I think the first approach gives a lot more room for the host to create optimizations, to the point we could see hardware with custom instructions to make WASM faster
Heh, there were literally CPUs with some support for the JVM! But it turns out that “translating” between different forms is not that expensive (and can be done ahead of time and cached), given that CPUs already use a higher level abstraction of x86/arm to “communicate with us”, while they do something else in the form of microcode. So it didn’t really pay off, and I would wager it wouldn’t pay off with WASM either.
mshockwave 42 days ago [-]
> Heh, there were literally CPUs with some support for the JVM!
Jazelle, a dark history that ARM never wants to mention again
perching_aix 42 days ago [-]
> JavaScript did deliver its promise of unbreakable sandbox
Aren't its VM implementations routinely exploited? Ranging from "mere" security feature exploits, such as popunders, all the way to full-on proper VM escapes?
Like even today, JS is run interpreted on a number of platforms, because JIT compiling is not trustworthy enough. And I'm pretty sure the interpreters are not immune either.
esrauch 42 days ago [-]
I think "routinely" is overstating it, billions people are running arbitrary JS on a daily basis and no meaningful number of them are being infected by malware.
Browser surface attracts the most intense security researcher scrutiny so they do find really wild chains of like 5 exploits that could possibly zero day, but it more reflects just how much scrutiny it has for hardening, realistically anything else will be more exploitable than that, eg your Chromecast playing arbitrarily video streams must he more exploitable than JS on a fully patched Chrome.
mmis1000 42 days ago [-]
Both Chrome and Firefox lock the JavaScript a site is running into its own box, using a standalone process and whatever mechanisms the system provides. A pwned site alone isn't enough to cause damage; you also need to overcome other layers of defense (unlike something like Flash, which could be owned from its script engine alone).
It usually requires multiple 0-days to overcome all those defenses and do anything useful. (It is also the highest glory at DEF CON.)
The browser is surely frequently attacked due to the high rewards. But it also gets patched really fast (as long as you are not using a browser from 10 years ago).
tightbookkeeper 42 days ago [-]
Flash/applets could have been isolated in a process too, right?
nox101 42 days ago [-]
yes but no, because they needed access to the OS for various services, all of which would have had to be isolated from the user code. Sun and Adobe would never have done this. Chrome did it, Safari and Firefox followed. WASM runs in that environment. Flash/applets ran outside of that environment. They did that precisely to provide services the browser didn't back then.
01HNNWZ0MV43FF 41 days ago [-]
Chrome did put a sandbox around Flash, didn't they? I thought the bigger reasons it died out was that it didn't integrate with DOM and Apple hated it
mdhb 42 days ago [-]
There were a bunch of things missing from OP's description around the security considerations of Wasm, but it has a lot of other stuff on top of what the browser provides when it's executing JavaScript.
The primary one is its idea of a "capability model", where it basically can't do any kind of risky action (i.e. touch the outside world via the network or the file system, for example) unless you give it explicit permission to do so.
Beyond that it has things like memory isolation, etc., so even an exploit in one module can't impact another, and each module has its own operating environment and permission scope associated with it.
emporas 42 days ago [-]
I was surprised that Google agreed to implement the capability model for Chrome. I would guess that asking the user for permission to access the microphone would not sit well with Google. On smartphones they own the OS, so they can ignore wasm's security model as much as they like.
mdhb 42 days ago [-]
I feel there's a bit of a disconnect here between Google's Ads division, who are looking to do the bare minimum to avoid getting repeatedly spanked (primarily by the EU, but now also with talk of a breakup in the US), and most other parts of Google, who (I say this entirely unironically) are by far the best of all major options with regards to security in both the browser and their public cloud offerings. I'd even extend that possibly to operating systems as well. ChromeOS is miles in front of anything else out there currently, but on mobile Android has historically lagged behind iOS, although that gap is close to indistinguishable in 2024.
maeil 40 days ago [-]
> on mobile Android has historically lagged behind iOS although that gap is close to indistinguishable in 2024.
This is true, but unfortunately in the negative sense: both are as insecure as each other, i.e. pwned. [1]
It is not my intention to be contrarian, but honestly this might be the most incorrect comment I've ever read on Hacker News, in several different ways. Sure, some of these might be subjective, but for example ChromeOS is Linux with a shiny coat on top; how could it be any better than, well, Linux, let alone miles ahead?
ewoodrich 42 days ago [-]
ChromeOS uses the Linux kernel but unless you enable developer mode (which has multiple levels of scary warnings including on every boot and requires completely wiping the device to enable) everything runs in the Chrome web sandbox or the Android VM.
A ChromeOS user isn't apt-get installing binaries or copy/pasting bash one liners from Github. If you enable the Linux dev environment, that also runs in an isolated VM with a much more limited attack surface vs say an out of the box Ubuntu install. Both the Android VM and Linux VM can and routinely are blocked by MDM in school or work contexts.
You could lock down a Linux install with SELinux policies and various other restrictions but on ChromeOS it's the default mode that 99% of users are protected by (or limited by depending on your perspective).
mdhb 42 days ago [-]
Even when you enable "developer mode", which is essentially Debian in a VM, the level of care that went into making sure that no matter what happens there you will never suffer a full system compromise is truly impressive.
To give you a sense of where they were half a decade ago, you can already see in this video that it was, as I described, miles in front of anything that exists even today: https://youtu.be/pRlh8LX4kQI
When we get to talking about when they went for a total ground up first principles approach with Fuchsia as a next generation operating system that is something else entirely on a different level again.
I genuinely didn’t have a hint of irony in my original comment. They are actually that much better when it comes to security.
42 days ago [-]
silvestrov 42 days ago [-]
Most of all, the problem with Java applets was that they were very slow to load and required so many resources that the computer ground to a halt.
They also took much longer to develop than whatever you could cook up in plain html and javascript.
kaba0 42 days ago [-]
Funnily enough, wasm also has the problem of “slow to load”. In that vein, a higher level bytecode would probably result in smaller files to transport. And before someone adds, the JVM also supports loading stuff in a streaming way - one just has to write a streaming class loader, and then the app can start immediately and later on load additional classes.
42 days ago [-]
gnz11 42 days ago [-]
To be fair, they were slow to load if you didn't have the browser extension and correct JRE installed.
kaba0 42 days ago [-]
I would add that most of it was politics.
The JVM is not fundamentally insecure, in the same way that no Turing-complete abstraction, like an x86 emulator, is. It's always the attached APIs that open up new attack surfaces. Since the JVM at the time was used to bring absolutely unimaginable features to the otherwise anemic web, it had to be unsafe to be useful.
Since then, the web has improved a huge amount; a complete online FPS game could literally be programmed in just JS almost a decade ago. If a new VM can just interact with this newfound JS ecosystem and rely on it to be the boundary, it can of course be made much safer. But that's not inherently due to this other VM.
norswap 42 days ago [-]
> WASM proved to be secure and JVM did not.
This is an oversimplification: there's nothing about the JVM bytecode architecture making it insecure. In fact, it is arguably simpler as an architecture than WASM.
Applets were just too early (you have to remember what the state of tech looked like back then), and the implementation was of poor quality to boot (owing in part to some technical limitations — but not only).
But worst of all, it just felt jank. It wasn't really part of the page, just a little box in it, that had no connection to HTML, the address bar & page history, or really anything else.
The Javascript model rightfully proved superior, but there was no way Sun could have achieved it short of building their own browser with native JVM integration.
Today that looks easy, just fork Chromium. But back then the landscape was Internet Explorer 6 vs the very marginal Mozilla (and later Mozilla Firefox) and proprietary Opera that occasionally proved incompatible with major websites.
skybrian 42 days ago [-]
Yes it’s true that there’s more to the story, but also, Java really is more complicated and harder to secure than WASM. You need to look at the entire attack surface and not just the bytecode.
For example, Java was the first mainstream language with built-in threading and that resulted in a pile of concurrency bugs. Porting Java to a new platform was not easy because it often required fixing threading bugs in the OS. By contrast, JavaScript and WASM (in the first version) are single-threaded. For JavaScript it was because it was written in a week, but for WASM, they knew from experience to put off threading to keep things simple.
Java also has a class loader, a security manager that few people understand and sensitive native methods that relied on stack-walking to make sure they weren’t called in the wrong place. The API at the security boundary was not well-designed.
A lot of this is from being first at a lot of things and being wildly ambitious without sufficient review, and then having questionable decisions locked in by backward compatibility concerns.
eduction 42 days ago [-]
> back then the landscape was Internet Explorer 6 vs the very marginal Mozilla
Your timeline is off by about five years. Java support shipped with Netscape Navigator 2 in 1995, and 95/96/97 is when Java hype and applet experimentation peaked.
By the time Mozilla spun up with open sourced Netscape code, Java in the browser was very much dead.
You nailed the other stuff though.
(Kind of an academic point but I’m curious if Java browser/page integration was much worse than JavaScript in those days. Back then JS wasn’t very capable itself and Netscape was clearly willing to work to promote Java, to the point of mutilating and renaming the language that became JavaScript. I’m not sure back then there was even the term or concept of DOM, and certainly no AJAX. It may be a case of JavaScript just evolving a lot more because applets were so jank as to be DOA)
empthought 40 days ago [-]
ActiveX and Macromedia Flash were also popular alternatives to Java applets. Until v8 and Nitro were available, browser-based JavaScript was not a credible option for many apps.
foobarian 42 days ago [-]
> There's only practical difference: WASM proved to be secure and JVM did not.
The practical reasons have more to do with how the JVM was embedded in browsers than with the actual technology itself (though Flash was worse in this regard). They were linked at the binary level and had the same privileges as the containing process. With the JS VM, the browser has a lot more control over I/O since the integration evolved this way from the start.
EasyMark 42 days ago [-]
What would you say is the performance difference between, say, running a Qt app natively compiled vs running it in WASM? I've always been curious but never tried. I know it would vary based on the application, but I'm thinking of something that is maybe calculating some Monte Carlo model and then displaying the result, or something else along those lines that actually maxes out the CPU at times rather than waiting on human interaction 99% of the time.
Dwedit 42 days ago [-]
> JavaScript did deliver its promise of unbreakable sandbox
I'm sure there's a big long list of WebKit exploits somewhere that will contradict that sentence...
BobbyTables2 42 days ago [-]
JavaScript is all fun and games until a type confusion bug in V8 allows arbitrary code execution from a simple piece of JavaScript code…
abound 42 days ago [-]
Sure, and if you find one of those, you can trade it in for $25k or more [1]
Unlike ActiveX, Silverlight, or Flash, it's an open standard developed by a whole bunch of industry players, and it has multiple different implementations (where Java sits on that spectrum is perhaps a bit fuzzier). That alone puts it head and shoulders above any of the alternatives.
Unlike the JVM, WASM offers linear memory, and no GC by default, which makes it a much better compilation target for a broader range of languages (most common being C and C++ through Emscripten, and Rust).
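A small sketch of what that linear memory looks like from the host side (the module and its export names, alloc and process, are invented):

    // The guest sees one flat array of bytes; a "pointer" is just a byte
    // offset into it, which the host can view through a typed array.
    const { instance } = await WebAssembly.instantiateStreaming(fetch("filter.wasm"));
    const memory = instance.exports.memory as WebAssembly.Memory;
    const alloc = instance.exports.alloc as (n: number) => number;
    const process = instance.exports.process as (ptr: number, len: number) => void;

    const ptr = alloc(1024);                                  // reserve 1 KiB in the guest heap
    new Uint8Array(memory.buffer, ptr, 1024).set([1, 2, 3]);  // host writes input in place
    process(ptr, 1024);                                       // guest reads/writes the same bytes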
> Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser.
WASM is bytecode, and I think most implementations share a lot of their runtime with the host JavaScript engine.
> In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.
The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.
DougMerritt 42 days ago [-]
> The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.
Indeed, graphics pioneer and all-around-genius Ivan Sutherland observed (and named) this back in 1968:
"wheel of reincarnation
"[coined in a paper by T.H. Myer and I.E. Sutherland On the Design of Display Processors, Comm. ACM, Vol. 11, no. 6, June 1968)] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.
"Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter."
That was why I stopped using the word 'tech' to refer to these things. You don't suddenly go back to not using the wheel after a time, or suddenly think the printing press was a bad idea after all. Those are techs. Many of the things we call tech nowadays are just paradigms. And frameworks are definitely not 'new technology'.
artikae 42 days ago [-]
All it takes for something to be replaced is something that does the job better. You can only really apply your definition in hindsight, after something has stood the test of time. You can't tell the difference between sails and wheels until after the rise of the steam engine.
wolvesechoes 42 days ago [-]
> Many of the things we call techs nowadays are just paradigms
More like fads sold to milk even more money from people.
WasmGC is there no matter what, unless we are talking about an incomplete implementation; also, there have been plenty of linear-memory-based bytecodes since 1958.
pdpi 42 days ago [-]
WasmGC is a feature you can opt in to, rather than a core feature of the platform. It's more of an enabler for languages that expect a GC from their host platform (for things like Dart and Kotlin). Inversely, other forms of bytecode might have linear memory, but the JVM isn't one of those.
For the purposes of OP's question, the memory model difference is one of the key reasons why you might want to use wasm instead of a java applet.
pjmlp 42 days ago [-]
The JVM is one bytecode among many since 1958; no need to keep bashing it as a way to champion WASM.
Opt-in or not, it is there on the runtime.
swsieber 42 days ago [-]
It seems relevant since we are in a thread asking to compare WASM to java applets.
Laremere 42 days ago [-]
Wasm has some great benefits over those technologies:
- Wasm has a verification specification that wasm bytecode must comply with. This verified subset makes the security exploits seen in those older technologies outright impossible. Attacks based around misbehaving hardware like Spectre or Rowhammer might still be possible, but you can't, e.g., reference memory outside of your wasm's memory by tricking the VM into interpreting a number you have as a pointer to memory that doesn't belong to you.
- Wasm bytecode is as trivial as it gets to turn into machine code. So implementations can be smaller and faster than a heavyweight VM.
- Wasm isn't owned by a specific company, and has an open and well written specification anyone can use.
- It has been adopted as a web standard, so no browser extensions are required.
As for computation on clients versus servers, that's already true for Javascript. More true in fact, since wasm code can be efficient in ways that are impossible for Javascript.
kgeist 42 days ago [-]
Btw, is WASM really more secure? JVM and .NET basically have capability-based security thanks to their OOP design together with bytecode verification: if you can't take a reference to an object (say, there's a factory method with a check), you can't access that object in any way (a reference is like an access token).
As far as I understand, in WASM memory is a linear blob, so if I compile C++ to WASM, isn't it possible to reference a random segment of memory (say, via an unchecked array index exploit) and then do whatever you want with it (exploit other bugs in the original C++ app)? The only benefit is that access to the OS is isolated, but all the other exploits are still possible (and impossible in JVM/.NET).
„We find that many classic vulnerabilities which, due to common mitigations, are no longer exploitable in native binaries, are completely exposed in WebAssembly. Moreover, WebAssembly enables unique attacks, such as overwriting supposedly constant data or manipulating the heap using a stack overflow.”
My understanding is that people talking about wasm being more secure mostly talk about the ability to escape the sandbox or access unintended APIs, not integrity of the app itself.
lifthrasiir 42 days ago [-]
For now, (typical) WASM is indeed more secure than (typical) JVM or .NET bytecodes primarily because external operations with WASM are not yet popular. WASM in this regard has the benefit of decades' worth of hindsight that it can carve its own safe API for interoperation, but otherwise not technically superior or inferior. Given that the current web browser somehow continues to ship and keep such APIs, I think the future WASM with such APIs is also likely to remain safer, but that's by no means guaranteed.
igrunert 42 days ago [-]
When discussing security it's important to keep in mind the threat model.
We're mostly concerned with being able to visit a malicious site, and execute wasm from that site without that wasm being able to execute arbitrary code on the host - breaking out of the sandbox in order to execute malware. You say the only benefit is that access to the OS is isolated, but that's the big benefit.
Having said that, WebAssembly has some design decisions that make your exploits significantly more difficult in practice. The call stack is a separate stack from WebAssembly memory that's effectively invisible to the running WebAssembly program, so return oriented programming exploits should be impossible. Also WebAssembly executable bytecode is separate from WebAssembly memory, making it impossible to inject bytecode via a buffer overflow + execute it.
The downside of WASM programs not being able to see the call stack is that it makes it impossible to port software that uses stackful coroutines/fibers/whatever you want to call them to WASM, since that functionality works by switching stacks within the same thread.
nox101 42 days ago [-]
Yes, you're missing something. Java applets and Flash sat outside of any security sandbox, and they ran the user's code in that insecure environment.
WASM, in browsers, runs entirely inside a secure environment with no access to the system.
    js -> browser -> os
             |
             +-- Flash/Java --> os

    vs

    wasm -> browser -> os
Further: WASM and JS are in their own process with no OS access. They can't access the OS except by RPC to the browser.
Flash/Java, though, ran all user code in the same process, with full access to the OS.
kaba0 42 days ago [-]
Seems like a trivial thing to fix though; it was a lack of will rather than an explicit design tradeoff. At the applets' time there was simply no such API surface to attach to and make useful programs.
nox101 40 days ago [-]
It's not a trivial thing to fix. It took Apple, Mozilla, and Google years to refactor their browsers to isolate user code in its own process and then efficiently IPC all services to other processes.
Chrome started with that, but also started without GPU-based graphics and spent 2-3 years adding yet another process to make it possible. Mozilla and Safari took almost 10 years to catch up.
kgeist 42 days ago [-]
>Wasm has verification specification. This verified subset makes security exploits seen in those older technologies outright impossible
Both Java and .NET verify their bytecode.
>Wasm bytecode is trivial (as it gets) to turn into machine code
JVM and .NET bytecodes aren't supercomplicated either.
Probably the only real differences are: 1) WASM was designed to be more modular and slimmer from the start, while Java and .NET were designed to be fat; there are currently modularization efforts, but it's too late; 2) WASM is an open standard from the start, and so browser vendors implement it without plugins.
Other than that, it feels like WASM is a reinvention of what already existed before.
flohofwoe 42 days ago [-]
AFAIK the big new thing in WASM is that it enforces 'structured control flow' - so it's a bit more like a high level AST than an assembly-style virtual ISA. Not sure how much of that matters in practice, but AFAIK that was the one important feature that enabled the proper validation of WASM bytecode.
iainmerrick 42 days ago [-]
I don't think there's any significant advance in the bytecode beyond e.g. JVM bytecode.
The difference is in the surface area of the standard library -- Java applets exposed a lot of stuff that turned out to have a lot of security holes, and it was basically impossible to guarantee there weren't further holes. In WASM, the linear memory and very simple OS interface makes the sandboxing much more tractable.
titzer 42 days ago [-]
I worked on JVM bytecode for a significant number of years before working on Wasm. JVM bytecode verification is non-trivial, not only to specify, but to implement efficiently. In Java 6 the class file format introduced stack maps to tame a worst-case O(n^3) bytecode verification overhead, which had become a DoS attack vector. Structured control flow makes Wasm validation effectively linear and vastly simpler to understand and vet. Wasm cleaned up a number of JVM bytecode issues, such as massive redundancy between class files (duplicate constant pool entries), length limitations (Wasm uses LEBs everywhere), typing of locals, more arithmetic instructions, with signedness and floating point that closer matches hardware, addition of SIMD, explicit tail calls, and now first-class functions and a lower-level object model.
kaba0 42 days ago [-]
Are they validating code to the same degree though? Like, there are obviously learned lessons in how WASM is designed, but at the same time JVM bytecode being at a slightly higher level of abstraction can outright make certain incorrect code impossible to express, so it may not be an apples-to-apples comparison.
What I’m thinking of is simply memory corruption issues from the linear memory model, and while these can only corrupt the given process, not anything outside, it is still not something the JVM allows.
titzer 42 days ago [-]
Wasm bytecode verification is more strict than JVM bytecode verification. For example, JVM locals don't have declared types, they are inferred by the abstract interpretation algorithm (one of the reasons for the afore-mentioned O(n^3) worst case). In Wasm bytecode, all locals have declared types.
Wasm GC also introduces non-null reference types, and the validation algorithm guarantees that locals of declared non-null type cannot be used before being initialized. That's also done as part of the single-pass verification.
Wasm GC has a lower-level object model and type system than the JVM (basically structs, arrays, and first-class functions, to which object models are lowered), so it's possible that a higher-level type system, when lowered to Wasm GC, may not be enforceable at the bytecode level. So you could, e.g. screw up the virtual dispatch sequence of a Java method call and end up with a Wasm runtime type error.
jeberle 42 days ago [-]
Thx for this perspective and info. Regarding "signedness and floating point that closer matches hardware", I'm not seeing unsigned integers. Are they supported? I see only:
> Two’s complement signed integers in 32 bits and optionally 64 bits.
Signed and unsigned are just different views on the same bits. CPU registers don't carry signedness either, after all; the value they carry is neither signed nor unsigned until you look at the bits and decide to "view" them as a signed or unsigned number.
With the two's complement convention, the concept of 'signedness' only matters when a narrow integer value needs to be extended to a wider value (e.g. 8-bit to 16-bit), specifically whether the new bits need to be replicated from the narrow value's topmost bit (sign extension) or set to zero (zero extension).
It would be interesting to speculate what a high-level language would look like with such sign-agnostic "Schroedinger's integer types".
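A tiny illustration of the widening point, in TypeScript-flavoured bit twiddling (just to show the idea, not WASM itself):

    // The same 8-bit pattern 0xFF widened to 32 bits two different ways:
    const b = 0xff;
    const zeroExtended = b & 0xff;         // 255: new bits set to zero (unsigned view)
    const signExtended = (b << 24) >> 24;  // -1: top bit replicated (signed view)
    // The bits themselves never carried a sign; the widening operation chose one.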
jeberle 42 days ago [-]
CPU instruction sets do account for signed vs unsigned integers. SHR vs SAR for example. It's part of the ISAs. I'm calling this out as AFAIK, the JVM has no support for unsigned ints and so that in turn makes WASM a little more compelling.
Yes, some instructions do - but surprisingly few (for instance there are signed/unsigned mul/div instructions, but add/sub are 'sign-agnostic'). The important part is that any 'signedness' is associated with the operation, and not with the operands or results.
kaba0 42 days ago [-]
Well, it has compiler intrinsics for unsigned numbers, for what it’s worth.
Laremere 42 days ago [-]
Wasm makes no distinction between signed and unsigned integers as variables, only calling them integers. The relevant operations are split between signed and unsigned.
See how there's only i32.load and i32.eq, but there are i32.lt_u and i32.lt_s. Loading bits from memory or comparing them for equality is the same operation bit for bit for signed and unsigned. However, less-than requires knowing the desired signedness, and is split between signed and unsigned.
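Roughly, in host-side terms (a sketch, not WASM syntax), the same 32 bits compare differently depending on which view you pick:

    // 0xFFFFFFFF is -1 under the signed view and 4294967295 under the unsigned view.
    const x = 0xffffffff;
    const ltSigned = (x | 0) < 1;     // true:  -1 < 1           (analogous to i32.lt_s)
    const ltUnsigned = (x >>> 0) < 1; // false: 4294967295 < 1   (analogous to i32.lt_u)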
iainmerrick 42 days ago [-]
I stand corrected! That’s great information, thanks. I didn’t know JVM bytecode had so many problems.
tptacek 42 days ago [-]
Java Applets and ActiveX had less-mediated (Applets, somewhat; ActiveX, not at all) access to the underlying OS. The "outer platform" of WASM is approximately the Javascript runtime; the "outer platform" of Applets is execve(2).
pajamaboin 42 days ago [-]
This article is about WASM on the server, so to answer your question: it's different because it's not pushing computational cost from the server to the client. It can, but it doesn't in all cases. That's a huge difference. Others have already commented on the rest (better sandboxing, isolation, etc.).
ranger_danger 42 days ago [-]
It's amazing how many people don't actually read the article and just start commenting right away. It's like leaving bad amazon reviews for products you haven't purchased.
flohofwoe 42 days ago [-]
> untrusted third party compiled code in a web browser.
WASM makes that safe, and that's the whole point. It doesn't increase the attack surface by much compared to running Javascript code in the browser, while the alternative solutions were directly poking through into the operating system, bypassing any security infrastructure of the browser for running untrusted code.
BiteCode_dev 42 days ago [-]
WASM is a child of the browser community and built on top of existing infra.
Java was an outsider trying to get in.
The difference is not in the nature of things, but rather who championed it.
tsimionescu 42 days ago [-]
Pushing compute to the client is the whole point, and is often a major improvement for the end user, especially in the era in which phones are faster than the supercomputers of the 90s.
And otherwise, WASM is different in two ways.
For one, browsers have gotten pretty good at running untrusted 3rd party code safely, which Flash or the JVM or IE or .NET were never even slightly adequate for.
The other difference is that WASM is designed to allow you to take a program in any language and run it in the user's browser. The techs you mention were all available for a single language, so if you already had a program in, say, Python, you'd have to re-write it in Java or C#, or maybe Scala or F#, to run it as an applet or Silverlight program.
pjmlp 42 days ago [-]
CLR means Common Language Runtime for a reason.
From 2001,
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
It's not the same thing though. All of these languages have specific constructs for integrating with the CLR; the CLR is not just a compilation target like WASM is. C++/CLI even has a fourth kind of variable compared to base C++ (^, managed references to a type, in addition to the base type, * pointers to the type, and & references to the type). IronPython has not had a GIL since its early days. I'm sure the others have significant differences, but I am less aware of them.
pjmlp 42 days ago [-]
As if WebAssembly doesn't impose similar restrictions, with specific kinds of toolchains, and now the whole components mess.
This WebAssembly marketing is incredible.
tsimionescu 42 days ago [-]
Are there any examples of how, say, C++ compiled for WASM is different from native C++, or Python on WASM vs CPython? I haven't really used or cared about WASM, so I'm happy to learn, I don't have some agenda here.
IshKebab 42 days ago [-]
ActiveX wasn't sandboxed so it was a security joke. Flash and Silverlight were full custom runtimes that a) only worked with a specific language, and b) didn't integrate well with the existing web platform. WASM fixes all of that.
tightbookkeeper 42 days ago [-]
But that’s missing a few steps. First they banned all those technologies saying JavaScript was sufficient, then only later made wasm.
There never was a wasm vs applet debate.
IshKebab 42 days ago [-]
Nobody banned Flash. Apple just sensibly didn't implement it, because it was shit on phones. Android did support Flash and the experience was awful.
tightbookkeeper 41 days ago [-]
They sure banned Java Applets.
> Nobody banned Flash.
What happened first? Chrome dropping support for flash, or flash stopped making updates?
bloppe 42 days ago [-]
WebAssembly has a few things that set it apart:
- The security model (touched on by other comments in this thread)
- The Component Model. This is probably the hardest part to wrap your head around, but it's pretty huge. It's based on a generalization of "libraries" (which export things to be consumed) to "worlds" (which can both export and import things from a "host"). Component modules are like a rich wrapper around the simpler core modules. Having this 2-layer architecture allows far more compilers to target WebAssembly (because core modules are more general than JVM classes), while also allowing modules compiled from different ecosystems to interoperate in sophisticated ways. It's deceptively powerful, yet also sounds deceptively unimpressive at the same time.
- It's a W3C standard with a lot of browser buy-in.
- Some people really like the text format, because they think it makes Wasm modules "readable". I'm not sold on that part.
- Performance and the ISA design are much more advanced than the JVM's.
duped 42 days ago [-]
> This is probably the hardest part to wrap your head around, but it's pretty huge.
It's just an IDL. IDLs have been around for a long time and have been used for COM, Java, .NET, etc.
dspillett 42 days ago [-]
> Can someone explain to me what the difference really is between WASM and older tech like Java Applets, ActiveX, Silverlight and Macromedia Flash
As well as the security model differences other are debating, and WASM being an open standard that is easy to implement and under no control from a commercial entity, there is a significant difference in scope.
WebAssembly is just the runtime that executes bytecode-compiled code efficiently. That's it. No large standard run-time (compile in everything you need), no UI manipulation (message passing to JS is how you affect the DOM, and how you read DOM status back), etc. It does one thing (crunch numbers, essentially) and does it well.
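For example, a minimal sketch of that message-passing pattern (the module and the import/export names are made up):

    let memory: WebAssembly.Memory;

    const imports = {
      env: {
        // The only "DOM capability" the guest gets: hand the page a UTF-8 string.
        setText: (ptr: number, len: number) => {
          const text = new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));
          document.querySelector("#out")!.textContent = text;
        },
      },
    };

    const { instance } = await WebAssembly.instantiateStreaming(fetch("crunch.wasm"), imports);
    memory = instance.exports.memory as WebAssembly.Memory;
    (instance.exports.run as () => void)(); // guest crunches numbers, then calls setText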
palmfacehn 42 days ago [-]
There have also been exploits of Chrome's JS sandbox. For me the greatest difference is that WASM is supported by the browser itself. There isn't the same conflict of interest between OS vendors and 3rd party runtime providers.
SkiFire13 42 days ago [-]
The replacement for those technologies is arguably javascript. WASM is more focused on performance by providing less abstractions and an instruction set closer to assembly (hence the name).
The issue with those older technologies was that the runtime itself was a third-party external plugin you had to trust, and they often had various security issues. WASM however is an open standard, so browser manufacturers can directly implement it in browser engines without trusting other third parties. It is also much more restricted in scope (fewer abstractions mean less work to optimize them!), which helps reduce the attack surface.
0x457 42 days ago [-]
> The replacement for those technologies is arguably javascript. WASM is more focused on performance by providing less abstractions and an instruction set closer to assembly (hence the name).
That is nonsense. WASM and JS have the exact same performance boundaries in a browser because the same VM runs them. However, WASM allows you to use languages where it's easier to stay on a "fast-path".
42 days ago [-]
mike_hearn 42 days ago [-]
Conceptually, they aren't that different. The details do matter though.
WASM on its own isn't anything special security-wise. You could modify Java to be as secure or actually more secure just by stripping out features, as the JVM is blocking some kinds of 'internal' security attacks that WASM only has mitigations for. There have been many sandbox escapes for WASM and will be more, for example this very trivial sandbox escape in Chrome:
... is somewhat reminiscent of sandbox escapes that were seen in Java and Flash.
But! There are some differences:
1. WASM / JS are minimalist and features get added slowly, only after the browser makers have done a lot of work on sandboxing. The old assumption that operating system code is secure is mostly no longer held, whereas in the Flash/applets/pre-Chrome era it was. Stuff like the Speech XML exploit is fairly rare, whereas the older attempts added a lot of features very fast and so there was more surface area for attacks.
2. There is the outer kernel sandbox if the inner sandbox fails. Java/Flash didn't have this option because Windows 9x didn't support kernel sandboxing, even Win2K/XP barely supported it.
3. WASM / JS doesn't assume any kind of code signing, it's pure sandbox all the way.
freetonik 42 days ago [-]
Not an answer, but I think it's unfair to group Flash with the others because both the editor/compiler and the player were proprietary. I guess the same applies to Silverlight at least.
Kwpolska 42 days ago [-]
The ActiveX "player" (Internet Explorer) was also proprietary. And I'm not sure if you could get away without proprietary Microsoft tools to develop for it.
afavour 42 days ago [-]
The big conceptual difference is that Flash, ActiveX etc allowed code to reach outside of the browser sandbox. WASM remains _inside_ the browser sandbox.
Also no corporate overlord control.
sebastianconcpt 42 days ago [-]
For starters, in that it gives you memory-safe bytecode computation that isn't coupled to one specific language.
Starlevel004 42 days ago [-]
You can't easily decompile WASM so it makes it harder to block inline ads.
afiori 42 days ago [-]
You can already compile javascript into https://jsfuck.com/ and you could also very easily recompile the wasm into js.
Obfuscation and transpilation are not new in JS-land.
tantalor 42 days ago [-]
> Amazon started the serverless age of compute with Lambda
Google App Engine (2008) predates Lambda (2014) by 6 years!
chubot 42 days ago [-]
Yeah also heroku and the whole generation of “PaaS”
I was never quite sure why we got the name “serverless”, or where it came from, since there were many such products a few years before, and they already had a name
App engine had both batch workers and web workers too, and Heroku did too
They were both pre-docker, and maybe that makes people think they were different? But I think lambda didn’t launch with docker either
randomdata 42 days ago [-]
> I was never quite sure why we got the name “serverless”, or where it came from
Serverless refers to the software not being a server (usually implied to be a HTTP server), as was the common way to expose a network application throughout the 2010s, instead using some other process-based means to see the application interface with an outside server implementation. Hence server-less.
It's not a new idea, of course. Good old CGI is serverless, but CGI defines a specific protocol whereas serverless refers to a broad category of various implementations.
bloppe 42 days ago [-]
Pedantry police here. I would define serverless to mean that all the hardware is completely abstracted away. For instance, on EC2, you have to pick an instance type. You pick how much memory and compute you need. On a managed Kubernetes cluster, you still have to think about nodes. On a serverless platform, though, you have no idea how many computers or what kinds of computers are actually running your code. It just runs when it needs to. Of course there's still an HTTP server somewhere, though.
So, you could run a CGI script on a serverless platform, or a "serverful" one. You could even run it locally.
Per wikipedia: "Serverless is a misnomer in the sense that servers are still used by cloud service providers to execute code for developers. However, developers of serverless applications are not concerned with capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers, virtual machines, or physical servers."
chubot 42 days ago [-]
FWIW I agree with you -- serverless does not refer to "web server", it refers to "linux server machine" (whether it's physical or virtual)
You don't care about the specific machine, the OS kernel, the distro, the web server, or SSL certificates when you're doing "serverless"
And the SAME was true of "PaaS"
This whole subthread just proves that the cloud is a mess -- nobody knows what "serverless" is or that App Engine / Heroku already had it in 2008 :)
randomdata 42 days ago [-]
> it refers to "linux server machine" (whether it's physical or virtual)
No, "server" most definitely refers to software that listens for network requests. Colloquially, hardware that runs such software is often also given the server moniker ("the computer running the server" is a mouthful), but that has no applicability within the realm of discussion here. If you put the user in front of that same computer with a keyboard and mouse controlling a GUI application, it would no longer be considered a server. We'd call it something like a desktop. It is the software that drives the terminology.
> nobody knows what "serverless" is or that App Engine / Heroku already had it in 2008 :)
Hell, we were doing serverless in the 90s. You uploaded your CGI script to the provider and everything else was their problem.
The difference back then was that everyone used CGI, and FastCGI later on, so we simply called it CGI. If you are old enough to recall, you'll remember many providers popped up advertising "CGI hosting". Nowadays it is a mishmash of proprietary technologies, so while technically no different than what we were doing with CGI back in the day, it isn't always built on literal CGI. Hence why serverless was introduced as a more broad term to capture the gamut of similar technologies.
chubot 42 days ago [-]
fly.io is "serverless", but there are HTTP servers inside your Docker container, so I don't agree -- in that case it refers to the lack of pinning to a physical machine
It would make for a much more interesting conversation if you cite some definitions/sources, as others have done here, rather than merely insisting that everyone thinks of the terms as you think of them
randomdata 41 days ago [-]
> fly.io is "serverless"
Right, with the quotes being theirs. Meaning even they recognize that it isn't serverless-proper, just a blatant attempt at gaining SEO attention in an effort to advertise their service. It is quite telling when an advertisement that explicitly states right in it it has nothing to do with serverless is the best you could come up with.
bloppe 42 days ago [-]
I agree that "serverless" is not a good name. But hey, it stuck :/
I also can't come up with one that's significantly better.
randomdata 42 days ago [-]
For all intents and purposes, when is the hardware not fully abstracted away? Even through the 2010s when running as a server was the norm, for the most part you could throw the same code onto basically any hardware without a second thought.
But pedantically, serverless is to be taken literally. It implies that there is no server in your application.
bloppe 42 days ago [-]
EC2 and managed kubernetes are two examples where you still have to think about hardware.
42 days ago [-]
randomdata 42 days ago [-]
Not really. The application doesn't care. Hell, many of these modern serverless frameworks are built so that they can run both server and serverless from the very same codebase, so it is likely you can take the same code built to run on someone's MacBook running macOS/ARM and run it on an EC2 instance running Linux/amd64 and then take it to a serverless provider on any arbitrary hardware without any code modification at all! I've been around the web since Perl was the de facto way to build web apps, and it has always been an exceptional situation to not have the hardware fully abstracted away. Typically, if it will run on one system, it will run on any system.
The move away from CGI/FastCGI/SCGI to the application being the server was a meaningful shift in how web applications were developed. Now that we've started moving the server back out of the application in favour of the process-based model again, albeit now largely through proprietary protocols instead of a standard like CGI, serverless has come into use in recognition of that.
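A hedged sketch of that "same codebase, two entry points" idea in TypeScript (the handler shape and event fields are made up and provider-specific):

    import { createServer } from "node:http";

    // One plain handler, independent of how it gets invoked.
    async function handle(path: string): Promise<{ status: number; body: string }> {
      return { status: 200, body: `hello from ${path}` };
    }

    // Serverless-style entry point: the platform calls this once per event.
    export const lambdaHandler = async (event: { rawPath: string }) => {
      const res = await handle(event.rawPath);
      return { statusCode: res.status, body: res.body };
    };

    // Classic "the application is the server" entry point for local or VM use.
    createServer(async (req, res) => {
      const out = await handle(req.url ?? "/");
      res.writeHead(out.status).end(out.body);
    }).listen(3000);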
conradev 42 days ago [-]
Serverless, to me, is purely about efficiency. One way to measure that is the time for a "cold start" or "going from a state where you pay no money to one where you pay money". These gains in efficiency remove the need for over-provisioning and in many cases allow you to pass these savings onto the consumer (if you want to).
Heroku is a few seconds:
> It only takes a few seconds to start a one-off dyno process or to scale up a web or worker process.
Lambda created Firecracker to be snappier:
> The duration of a cold start varies from under 100 ms to over 1 second.
I think App Engine is in the same ballpark as Lambda (and predated it). Fly.io uses Firecracker too:
> While Fly Machine cold starts are extremely fast, it still takes a few hundred milliseconds, so it’s still worth weighing the impact it has on performance.
but WASM is yet another order of magnitude faster and cheaper:
> Cloudflare Workers has eliminated cold starts entirely, meaning they need zero spin up time. This is the case in every location in Cloudflare's global network.
WASM is currently limited in what it can do, but if all you're doing is manipulating and serving HTML, it's fantastic at that.
dartos 42 days ago [-]
When lambda came out and serverless started getting big, most scrappy startups hired many frontend devs.
It was the heydays of SPAs, light backends, and thick frontends.
“Serverless” is a great way to say “you don’t need to be a backend dev or even know anything about backend to deploy with us”
And it worked really really well.
Then people realized that they should know a thing or two about backend.
I always really hated that term.
Uehreka 42 days ago [-]
PaaS, Containerization and Serverless are different concepts.
App Engine is PaaS: You provide your app to the service in a runnable form (maybe a container image, maybe not) and they spin up a dedicated server (or slice of a server) to run it continuously.
Lambda is Serverless: You provide them a bit of code and a condition under which that code should run. They charge you only when that thing happens and the code runs. How they make that happen (deploy it to a bajillion servers? Only deploy it when it’s called?) are implementation details that are abstracted from the user/developer as long as Lambda makes sure that the code runs whenever the condition happens.
So with PaaS you have to pay even if you have 0 users, and when you scale up you have to do so by spinning up more “servers” (which may result in servers not being fully utilized). With Serverless you pay for the exact amount of compute you need, and 0 if your app is idle.
chubot 42 days ago [-]
> They charge you only when that thing happens and the code runs.
That's how App Engine worked in 2008, and it looks like it still works that way:
Apps running in the flexible environment are deployed to virtual machine types that you specify. These virtual machine resources are billed on a per-second basis with a 1 minute minimum usage cost.
This applied to both the web workers and the batch workers
It was "serverless" in 2008!
> spin up a dedicated server (or slice of a server) to run it continuously.
Absolutely NOT true of App Engine in 2008, and I'm pretty sure Heroku in 2008 too!
tantalor 42 days ago [-]
I recall you could configure app engine with maximum number of instances you wanted, but you definitely weren't charged if usage was 0. They would start the instances as needed.
The fact that lambda would automatically scale to meet whatever QPS you got sounds terrifying.
friendzis 41 days ago [-]
Serverless is indeed a weird name if you know what you are talking about. I was dumbfounded by the term until I met people who actually thought of anything beyond pushing to git as "the server".
Backend returns 4xx/5xx? The server is down. Particular data is not available in this instance and app handles this error path poorly? The server is down. There is no API to call for this, how do I implement "the server"?
Some people still hold the worldview that application deployment is similar to mod-php, where source files are yoloed onto the live filesystem. In this worldview, ignorant of the complexities of operations, serverless is a perfectly fitting marketing term, much like Autopilot, first chosen by Musk, chef's kiss.
randomdata 41 days ago [-]
> Serverless is indeed a weird name if you know what you are talking about.
It is a perfectly logical name if you know what you are talking about and are familiar with the history of how these so-called serverless applications used to be developed.
Which is to say that back in the day, once CGI fell out of fashion, the applications became servers themselves. You would have a listening HTTP server right within the application, often reverse-proxied through something like Apache or nginx, and that is how it would be exposed to the world. The downside of this model is that your application always needs to be resident in order to serve requests, and, from a scaling perspective, you need to predict ahead of time how many server instances are needed to handle the request load. This often resulted in poor resource utilization.
Now with a return to back to the CGI-esq model, where you have managing servers call upon the application through a process-based execution flow, albeit no longer using CGI specifically, the application is no longer the server again. This allows systems to save on resources by killing off all instances of your application when no requests are happening, and, with respect to scalability, it gives the freedom to the system the ability to launch as many instances of your application as is required to handle the load when the requests start coming in.
Hence, with the end of the application being the server under the adoption of said process-based model, the application became serverless.
> I was dumbfounded by the term
The marketers have certainly tried to usurp the term for other purposes. It seems just about everything is trying to be called "serverless" nowadays. Perhaps that is the source of your dumbfoundary? Then again, if you know what you are talking about then you know when marketers are blowing smoke, so...
torginus 42 days ago [-]
Just in Time (JIT) compilation is not possible as dynamic Wasm code generation is not allowed for security reasons.
This sounds... not right. Honestly, this is an essential feature for allowing workloads like hot reloading code cleanly.
I'm quite convinced the alleged security argument is bull. You can hot reload JS (or even do wilder things like codegen) at runtime without compromising security. Additionally, you can emulate codegen or hot reload, by dynamically reloading the entire Wasm runtime and preserving the memory, but the user experience will be clunky.
I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
I kind of dislike WASM. It's a project lacking strong direction and the will to succeed in a timely manner. First, the whole idea is conceptually unclear: its name suggests that it's supposed to be 'assembly for the web', a machine language for a virtual CPU, but it's actually an intermediate representation meant for compiler backends, with high-level features planned such as GC support.
It's still missing basic features, like the aforementioned hot reload, non-hacky threading, native interfacing with the DOM (ideally without Javascript), low-overhead graphics/compute API support, low-level audio access etc.
You can't run a big multimedia app in it without major compromises.
bhelx 42 days ago [-]
The statement is correct. Wasm cannot mark memory as executable. It's effectively a Harvard architecture: code and data are kept separate. Furthermore, you cannot jump to arbitrary points in code. There isn't even a jump instruction.
> I'm quite convinced the alleged security argument is bull. You can hot reload JS (or even do wilder things like codegen) at runtime without compromising security.
JIT here is referring to compiling native code at runtime and executing it. This would be a huge security compromise in the browser or in a wasm sandbox.
> I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
> Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
Yes, and like with Wasm, the engine is responsible for JITting. But giving the user the power to escape the runtime and emit native code and jump to it is dangerous.
tombl 42 days ago [-]
wasm has no way to remap writable memory as executable, but you can absolutely call back into javascript to instantiate and link a new executable module, like https://github.com/remko/waforth does.
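Roughly like this (browser-side TypeScript; buildWasmBytes is a stand-in for whatever produces the new module, e.g. a compiler running inside the sandbox):

    // Some code running in the page (or compiled to wasm itself) emits a fresh
    // module as plain bytes...
    declare function buildWasmBytes(): Uint8Array;

    async function loadGeneratedModule() {
      const bytes = buildWasmBytes();
      // ...and asks the host JS engine to validate, compile and link it. The new
      // code still runs inside the same sandbox and can only touch the imports
      // we explicitly hand it here.
      const { instance } = await WebAssembly.instantiate(bytes, {
        env: { log: (x: number) => console.log(x) },
      });
      return instance.exports;
    }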
bhelx 42 days ago [-]
Yes, I understand that you can do anything with imports. But that's not part of the Wasm spec. That's a capability the host has decided to give the module. Of course the person with the most privilege can always open holes up, but that capability is not there by default.
flohofwoe 42 days ago [-]
> Just in Time (JIT) compilation is not possible as dynamic Wasm code generation is not allowed for security reasons.
Browsers definitely use a form of JIT-ing for WASM (which is a bit unfortunate, because just as with JITs, you might see slight 'warmup stutter' when running WASM code for the first time - although this has gotten a lot better over the years).
...also I'm pretty sure you can dynamically create a WASM blob in the browser and then dynamically instantiate and run that - not sure if that's possible in other WASM runtimes though, and even in the browser you'll have to reach out to Javascript, but that's needed for accessing any sort of 'web API'.
torginus 42 days ago [-]
>Browsers definitely use a form of JIT-ing for WASM
I (and the article) wasn't referring to this kind of JIT. I was referring to the ability to dynamically create or modify methods or load libraries while the app is running (like `DynamicMethod` in .NET).
Afaik WASM even in the browser does not allow modifying the blob after instantiation.
The thing you are referring to puzzles me as well. I initially thought that WASM would be analogous to x86 or ARM asm and would be just another architecture emitted by the compiler. Running it in the browser would just involve a quick translation pass to the native architecture (with usually 1-to-1 mapping to machine instructions) and some quick check to see that it doesn't do anything naughty. Instead it's an LLVM IR analog that needs to be fed into a full-fledged compiler backend.
I'm sure there are good technical reasons as to why it was designed like this, but as you mentioned, it comes with tangible costs like startup time and runtime complexity.
flohofwoe 42 days ago [-]
> Afaik WASM even in the browser does not allow modifying the blob after instantiation.
...not your own WASM blob, but you can build a new WASM blob and run that.
> The thing you are referring to puzzles me as well...
Yes, compilers emit WASM, but that WASM is just a bytecode (similar to JVM or .NET bytecode but even higher level because WASM enforces 'structured control flow') and needs to be compiled to actual machine code on the client before it can run, and this isn't a simple AOT compilation - in browsers at least (it used to be for a while in Firefox, but that caused issues for large projects like Unity games, which might take dozens of seconds to AOT compile).
AFAIK all browsers now use a tiered approach. The WASM-to-machine-code compilation doesn't happen on the whole WASM blob at once, but function by function. The first time a WASM function is called, a fast compilation happens which may have slow runtime performance; from then on, 'hot functions' are compiled with a higher-tier backend which does additional optimization, is slower to compile, but has better runtime performance - and AFAIK this is also quite similar to how Javascript JIT-ing works.
Also from what I understand WASM compilation is more complex than just translating bytecode instructions to native instructions. It's more like compiling an AST into machine code - at least if you want any performance out of it.
The only difference to JS might be that WASM functions are never 'de-optimized'.
torginus 42 days ago [-]
I feel like I need to be a bit more frank
> WASM is just a bytecode (similar to JVM or .NET bytecode but even higher level ...
Yes, and I think this was a poor engineering choice on the part of the WASM engineering team, instead of using something much closer to actual assembly. And we are grappling with long startup times and lots of compiler infra pushed into the client because of that.
> ...not your own WASM blob, but you can build a new WASM blob and run that.
another baffling limitation, considering you can modify your C#, Java or even native code at runtime.
Unless they are working around some constraint unknown to me, in which case I'd love to know what it is, they made bad technical decisions in the design.
flohofwoe 42 days ago [-]
> they made bad technical decisions in the design
Considering that the most important design requirement was to have a security model that's good enough for running untrusted code in web browsers at near native performance, I think the WASM peeps did a pretty good job.
Your requirements may be different, but then maybe WASM simply isn't the right solution for you (there are plenty of alternatives outside web browsers after all).
torginus 42 days ago [-]
PNaCl also had the same sandboxing requirement, yet had many of the features still missing today from Wasm (threads, 3D graphics API support, access to other native APIs), and it didn't suffer from slow startup times. It had pretty nice and quick uptake considering the tooling was very similar to native toolchains.
According to this benchmark (first Google result I found), it was even faster:
While it might not have been perfect, WASM is yet to catch up in many ways, and some of its limitations might come from its design.
flohofwoe 42 days ago [-]
I had been working both with NaCl and PNaCl back then, and truth be told, once Google made the switch from NaCl to PNaCl most advantages just disappeared. The compilation of the PNaCl bytecode on start (which was more or less just a subset of LLVM IR) took longer than even the first WASM implementations.
PNaCl definitely suffered hard from slow startup times because it ran LLVM for compilation from PNaCl bytecode to native code on startup, and LLVM is slow (I even noticed this compilation process on startup on my absolutely trivial test code). Only the predecessor NaCl didn't suffer from this problem.
There was no 'access to other native APIs', PNaCl created its own set of wrapper APIs to access browser features, and while some of those were better than their standardized web API counterparts, some NaCl/PNaCl APIs were worse than the web APIs they replaced - and for the future, PNaCl would have to create more non-standard APIs for every little feature available in browsers, because:
Integration with the webpage and Javascript was done via message passing, which was just terrible when compared to how easy and fast it is to call between WASM and JS.
The NaCl/PNaCl multithreading feature would have been hit just as hard by Spectre/Meltdown as the SharedArrayBuffer based threading in WASM.
Finally, when you look at the PNaCl toolchain versus Emscripten, Emscripten definitely comes out on top because Emscripten was much more concerned about integrating well with existing build systems and simplifying the porting of existing code, while NaCl/PNaCl had its own weird build system (in old Google NIH tradition). Working with NaCl/PNaCl felt more like working with the Android NDK, which is pretty much the worst developer experience in the world.
titzer 42 days ago [-]
It's also worth noting that the NaCl and PNaCl teams were integrated into a large Wasm team at Google and brought their expertise to the project. While we didn't all 100% agree on every decision made in Wasm design, we were intimately familiar with the tradeoffs made by those prior projects.
Ultimately the sandboxing requirement of running in-process with the renderer process and integrating with Web APIs like JS dictated hard requirements for security.
jillesvangurp 42 days ago [-]
WASM replaces a language specific vm (javascript) with a general purpose one anywhere javascript vms are currently used. But not exclusively just there. General purpose here means it can run just about anything with a compiler or interpreter for it. Including javascript. So anything, anywhere.
Since it is generally implemented as part of the javascript engine, it inherits a lot of stuff that comes with it like sandboxing and access to the APIs that come with it. Standardizing access to that is a bit of an ongoing process but the end state here is that anything that currently can only be done in Javascript will also be possible in WASM. And a lot more that is currently hard or impossible in Javascript. And it all might run a little faster/smoother.
That makes WASM many things. But the main thing it does is remove a lot of restrictions we've had on environments where Javascript is currently popular. Javascript is a bit of a divisive language. Some people love it, some people hate it. It goes from being the only game in town to being one of many things you can pick to do a thing.
It's been styled as a Javascript replacement, as a docker replacement, as a Java replacement, a CGI replacement (this article), etc. The short version of it is that it is all of these things. And more.
marcyb5st 42 days ago [-]
While I don't have a problem with Javascript, I have a problem with the ecosystem around publishing JS for the web. There are so many tools that do more or less the same thing and whose boundaries are unclear. Additionally, when you eventually manage to get everything working it feels brittle (IMHO). For someone that doesn't do that professionally, it is daunting.
Nowadays, the few times I need to build something for the web I use Leptos, which has a much nicer DX, and even if it hasn't reached 1.x yet, it feels more stable than chaining like 5 tools to transpile, uglify, minify, pack, ... your JS bundle.
fallous 42 days ago [-]
This article really does remind me of an old Law of Software that we used to invoke: Any sufficiently large and long-lived application will eventually re-implement the entire software stack it runs on, including the operating system... and it will re-implement it poorly.
I'm unsure of the source for this Law, but it certainly proves correct more often than not.
PoignardAzur 42 days ago [-]
The witty version is known as Greenspun's tenth rule:
"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
The general pattern is called the Inner-Platform Effect.
fallous 42 days ago [-]
YES! The Inner-Platform Effect is exactly what I was trying to dig up through my fossilized neurons. Thank you.
anthk 42 days ago [-]
And a complete TCL spec.
layer8 42 days ago [-]
To expand the premise in the title, to be a true heir to that lineage, I would say that WASM needs to be as easy to host and deploy as PHP applications are (or used to be) on the LAMP stack of any random hosting provider. I suspect that’s not quite the case yet?
thomastjeffery 42 days ago [-]
WASM runs in the browser... What about hosting do you expect to be different?
tmpz22 42 days ago [-]
A more accessible toolchain for complete beginners.
PHP was literally copy/paste code snippets into a file and then upload it to a hosting provider.
I don't build for WASM but I'll bet the money in my pocket to a charity of your choice that its harder for a beginner.
layer8 42 days ago [-]
The article is about WASM on the server, hence the analogy to CGI(-bin) in the title.
thomastjeffery 42 days ago [-]
I see. My fault for not reading on from "From CGI to Serverless" to the "Wasm on the Server" section.
fmajid 42 days ago [-]
Like Java and JavaScript before it, WASM can also run on Kubernetes clusters and plenty of other non-browser contexts.
cheema33 42 days ago [-]
I have a different take on this. I think local-first is the future. This is where the app runs mostly within the user's browser with little to no help from the server. Apps like Figma, Linear and Superhuman use this model very successfully. And to some degree Stackblitz does as well.
If somewhat complex apps like Figma can run almost entirely within the user's browser, then I think the vast majority of the apps out there can. The server side is mostly there to sync data between different instances of the app if the user uses it from different locations.
The tooling for this is in the works, but is not yet mature, e.g. Electric-SQL. Once these libraries are mature, I think this space will take off.
Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
WASM could succeed as well. But mostly in user's browser. Microsoft uses it today for C#/Blazor. But it isn't the correct approach as dotnet in browser will likely never be as fast as Javascript in the browser.
llm_trw 42 days ago [-]
>Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
CGI empowers users and small sites. No one talks about it because you can't scale to a trillion ad impressions a second on it. Serverless functions add 10 feet to Bezos's yacht every time someone writes one.
mattdesl 42 days ago [-]
I’m not sure I’d call Figma local first. If I’m offline or in a spotty wifi area, I can’t load my designs. And unless it’s recently changed, if you lose wifi and quit the browser after some edits, they won’t be saved.
curtisblaine 42 days ago [-]
That's intentional: they need you and your data tied to the server to make money. But there's no reason why it couldn't be local first (except the business model), since the bulk of execution is local.
Incidentally, I think that's why local-first didn't take off yet: it's difficult to monetize and it's almost impossible to monetize to the extent of server-based or server-less. If your application code is completely local, software producers are back to copy-protection schemes. If your data is completely local, you can migrate it to another app easily, which is good for the user but bad for the companies. It would be great to have more smaller companies embracing local-first instead of tech behemoths monopolizing resources, but I don't see an easy transition to that state of things.
llm_trw 42 days ago [-]
>Incidentally, I think that's why local-first didn't take off yet
Local first is what we had all the way from the '80s through the '10s. It's just that you can make a lot more from people who rent your software rather than buy it.
baq 42 days ago [-]
The sweet, sweet ARR. Investors love it, banks love it, employees should also love it since it makes their paychecks predictable.
It sucks for customers, though.
OtomotO 42 days ago [-]
More and more reliably.
When people have a subscription that can't simply be cancelled any given month, it gives the company more financial security.
Previously people would buy e.g. the creative suite from Adobe and then work with that version for many, many years to come
curtisblaine 42 days ago [-]
Previously people would crack CS from Adobe then work with that version for many, many years to come :)
llm_trw 42 days ago [-]
Previously amateurs would crack Adobe software and then get a letter telling them they needed to pay or be sued when they went professional.
The cracked software was there to onramp teens into users. Adobe has burned this ramp and now no one under 14 uses it any more which is quite the change from when I was 14.
actionfromafar 42 days ago [-]
True, but do all those people now pay $100 a month to Adobe? Hardly.
auggierose 42 days ago [-]
If they need what Adobe offers, yes.
pen2l 42 days ago [-]
A better example than Figma is Rive, made with Flutter.
Works well local-first, and syncs with the cloud as needed. Flutter space lends itself very well to making local-first apps that also play well in the cloud.
torginus 42 days ago [-]
Hehehe, so the future is how we used to run applications from before the era of the web.
flohofwoe 42 days ago [-]
Except with runtime safety, no installation process, no pointless scare popups when trying to run an app directly downloaded from the internet, and trivial distribution without random app store publishing rules getting in the way.
In a way - yes - it's almost like it was before the internet, but mostly because other ways to distribute and run applications have become such a hassle, partly for security reasons, but mostly for gatekeeping reasons by the "platform owners".
torginus 42 days ago [-]
Apps like these were incredibly common on Windows from the late 90s-early 2010s era. They could do all this (except for the sandboxing thing). You just downloaded a single .exe file, and it ran self-contained, with all its dependencies statically linked, and it would work on practically any system.
On macOS, the user-facing model is still that you download an application, drop it in the Applications folder, and it works.
afiori 42 days ago [-]
> They could do all this (except for the sandboxing thing).
The sandbox is very very important, it is the reason I mostly do not worry about clicking random links or pasting random urls in a browser.
There are many apps that I would have liked to try if not for the security risk.
d3VwsX 42 days ago [-]
The download of a single EXE to keep had a nice side-effect though, that it made it trivial to store (most) apps (or their installers) for future use. Not so sure if in-browser apps can do that (yet?) except maybe by saving an entire virtual machine containing the web browser with the app installed.
flohofwoe 42 days ago [-]
> You just downloaded a single .exe file, and it ran self-contained, with all its dependencies statically linked, and it would work on practically any system.
Yeah, but try that today (and even by 2010 that wouldn't work anymore). Windows will show a scare popup with a very hard to find 'run anyway' button, unless your application download is above a certain 'reputation score' or is code-signed with an expensive EV certificate.
> On MacOS, the user facing model is still that you download an application, drop it in the Applications folder, and it works.
Not really, macOS will tell you that it cannot verify that the app doesn't do any harm and helpfully offer to move the application into the trash bin (unless the app is signed and notarized - for which you'll need an Apple developer account, and AFAIK even then there will be a 'mild' warning popup that the app has been downloaded from the internet and whether you want to run it anyway). Apple is definitely nudging developers towards the app store, even on macOS.
consteval 42 days ago [-]
Yes and Windows in that time period had massive issues with security and culture. The culture of downloading and running EXEs from the internet quickly caught up to everyone, and not in a good way.
Also the "big idea" is that those applications aren't portable. Now that primary computers for most people are phones, portable applications are much more important.
bigstrat2003 42 days ago [-]
Except worse, because everything has to run in a gigantic web browser even if it could be a small native app.
adwn 42 days ago [-]
Except better, because it doesn't only work on Windows, and because I don't invite a dozen viruses into my computer.
jauntywundrkind 42 days ago [-]
Every native app has to be run in a gigantic special OS when it could be a small webapp running in a medium-sized browser.
Many many ChromeOS (web-based consumer OS) laptops are 4GB of RAM. You do not want to try that with any normal OS.
dkersten 42 days ago [-]
That’s because Windows is loaded with trash. You can easily run desktop Linux with 4 GB of RAM, and people have been doing it for decades.
VyseofArcadia 42 days ago [-]
But the browser is running in that gigantic special OS. It's not like the OS magically disappears.
jauntywundrkind 42 days ago [-]
I've already mentioned ChromeOS as one counter-example.
SerenityOS and Ladybird browser forked but until recently had a lot of overlap.
LG's WebOS is used on a range of devices, derived from the Palm Pre WebOS released in 2009.
The gigantic special OS is baggage which has already been cut loose numerous times. Yes, you can run some fine light Linux OSes in 4GB, but man, having done the desktop install for GNOME or KDE, they are not small at all, even if their runtime is OK. And most users will then go open a web browser anyway. It's unclear to me why people cling to the legacy native app world, why this other, not-connected mode of computing has such persistent adherence. The web ran a fine mobile OS in 2009; the Palm Pre rocked. It could today.
VyseofArcadia 42 days ago [-]
I for one don't want to use web apps. I want the speed, convenience, and availability of native apps. I want to use applications that work if the internet isn't. I want to use applications that store my data locally. I want to use unglamorous applications that just work and use a native GUI toolkit instead of torturing a poor, overburdened document display engine into pretending it's a sane place for apps to run.
Not to mention, from the perspective of a developer, the relative simplicity of native apps. Why should I jump through all the hoops of distributed computing to, for example, edit a document in a WYSIWYG editor? This is something I could do comfortably on a Packard Bell in 1992.
consteval 42 days ago [-]
The Web is portable, operating systems are not. Windows and Mac, being short-sighted, did this to themselves. Nobody can agree on anything, Microsoft is constantly deprecating UI frameworks, and it's not convenient at all to write local apps.
It's only JUST NOW we have truly portable UI frameworks. And it's only because of the Web.
Vampiero 42 days ago [-]
The only thing that defines portability is everyone adhering to the same standards.
You say that the web is portable, but really, only Google's vision for the web is relevant, seeing how they have the final say in how the standards are implemented and evolved.
So it's basically another walled garden, only much bigger and not constrained to the CPU architecture and OS kernel.
Chromium IS a platform. And indeed many applications that do work on Chrome don't work on Firefox. So we're pretty much back where we started, but the problem is harder to see because Chrome has such a monopoly over browsers that for most intents and purposes, and for most devs, it's the only platform that exists.
Everyone is good at multiplat when there's only one plat.
VyseofArcadia 42 days ago [-]
QT has been around for decades. So has GTK. Bindings for whatever language you could possibly want. Runs on whatever OS you want. We've had "truly portable" UI frameworks since the late 90s. This has not been an issue for my entire adult life. 20 years ago, I was using desktop applications that ran on Mac OS X, Windows, and *nix with no modifications. They were written in Python, used GTK, and just worked.
Web apps are popular because 1) people don't like installing things anymore for some reason and 2) it's easier to justify a subscription pricing model.
consteval 42 days ago [-]
Even those are not portable because they don't target the #1 personal computer in use - smart phones.
jauntywundrkind 42 days ago [-]
These are all the views of a fossil. Maybe some truth, historically, but years out of date.
Want an offline app? Possible for a long time: build a local-first app. Don't want to build a client-server system? Fine, build an isolated webapp. There are so many great tools for webdev that get people going fast, that are incomparably quick at throwing something together. It's just bias and ignorance of an old crusty complainy world. This is a diseased view, is reprehensibly small-minded & aggressively mean, and it's absurd given how much incredible effort has been poured into making HTML and CSS incredibly capable, competent, featureful, fast systems. For shame: "torturing a poor, overburdened document display engine into pretending it's a sane place for apps to run".
The web has a somewhat earned reputation for being overwhelmed by ads, which slow things down, but today it feels like most native mobile apps are 60MB+ and also have burdensome slow ads too.
There haven't really been any attempts to go all in on the web. It has been kind of a second-system half measure, for the most part, since the Pre's WebOS gave up on mobile (and FirefoxOS never really got a chance). Apps have had their day and I'm fine with there being offerings for those with a predilection for prehistoric relics, but the web deserves a real full go, deserves a chance too, and the old salty grudges and mean spirits shouldn't obstruct the hopeful & the excited who have pioneered some really great tech that has become the most popular connected ubiquitous tech on the planet, but which is also still largely a second system and not the whole of the thing.
The web people are always hopeful & excited & the native app people are always overbearingly negative nellies, old men yelling at the cloud. Yeah, there's some structural issues of power around the cloud today, but as Molly White's recent XOXO talk says, the web is still the most powerful system that all humanity shares that we can use to enrich ourselves however we might dream, and I for one feel great excitement and energy, that this is the only promise I see right now that shows open potential. (I would be overjoyed to see native apps show new promise but they feel tired & their adherents to be displeasurable & backwards looking) https://www.youtube.com/watch?v=MTaeVVAvk-c
VyseofArcadia 42 days ago [-]
These are all the views of someone who is hopelessly naive. Maybe some truth, but ignorant of where we came from and how we got here. This is a diseased view, is reprehensible, small minded, and aggressively mean, and it's absurd given how much complexity has been poured into making computers do simple things in the most complex way possible.
My man, I am not a fossil. I came of age with web apps. But I am someone who has seen both sides. I have worked professionally on both desktop applications and as a full stack web developer, and my informed takeaway is web apps are insane. Web dev is a nightmarish tower of complexity that is antithetical to good engineering practice, and you should only do it if you are working in a problem space that is well and truly web-native.
I try to live by KISS, and nontrivial web apps are not simple. A couple of things to consider:
1. If it is possible to do the same task with a local application, why should I instead do that task with a web app that does everything in a distributed fashion? Unnecessary distributed computing is insane.
2. If it is possible to do the same task with a local application, and as a single application, not client-server, why should I accept the overhead of running it in a browser? Browsers are massive, complex, and resource hungry. Sure, I'll just run my application inside another complex application inside a complex OS. What's another layer? But actually, raw JS, HTML, and CSS are too slow to work with, so I'll add another layer and do it with React. But actually, React is also too slow to work with, so I'll add another layer and do it with Next.js. That's right, we've got frameworks inside of frameworks now. So that's OS -> GUI library -> browser -> framework -> framework framework -> application.
3. The world desperately needs to reduce its energy consumption to reduce the impact of climate change. If we can make more applications local and turn off a few servers, we should.
I am not an old man yelling at the cloud. I am a software engineer who cares deeply about efficient, reliable software, and I am begging, pleading for people to step back for a second and consider whether a simpler mode of application development is sufficient for their needs.
jauntywundrkind 41 days ago [-]
> Browsers are massive, complex, and resource hungry. Sure, I'll just run my application inside another complex application inside a complex OS. What's another layer? But actually, raw JS, HTML, and CSS are too slow to work with, so I'll add another layer and do it with React.
That's just your opinion, and you're overgeneralizing one framework as the only way.
A 2009 mobile phone did pretty damned awesome with the web. The web is quite fast if you use it well. Sites like GitHub and YouTube use web components & can be extremely fast & featureful.
Folks complain about layers of web tech but what's available out of box is incredible. And it's a strength not a weakness that there are many many ways to do webdev, that we have good options & keep refining or making new attempts. The web keeps enduring, having strong fundamentals that allow iteration & exploration. The Extensible Web Manifesto is alive and well, is the cornerstone supporting many different keystone styles of development. https://github.com/extensibleweb/manifesto
It's just your opinion, again and again, that the web is so bad, all without evidence. It's dirty, shitty hearsay.
Native OSes are massive, complex, and resource hungry and better replaced by the universal hypermedia. We should get rid of the extra layers of non-web that don't help, that are complex and bloated.
wolvesechoes 42 days ago [-]
There is no other industry that is equally driven by fads and buzzwords. Try to hide the simple fact that the whole motivation behind SaaS preaching is greed, and bait users with an innovative "local-first" option.
It is actually kinda funny to read cries about "enshittification" and praises for more web-based bullshittery on the same site, although both are clearly connected and support each other. Good material for studying false consciousness among the dev proletariat.
smolder 42 days ago [-]
I also support the development of client side applications, but I don't think they should necessarily be run in a browser or sandbox or be bought through an app store, and it's definitely not a new idea.
moi2388 42 days ago [-]
> Microsoft uses it today for C#/Blazor. But it isn't the correct approach as dotnet in browser will likely never be as fast as Javascript in the browser.
Might be true, but both will be more than fast enough. We develop Blazor WASM. When it comes to performance, dotnet is not the issue.
josephg 42 days ago [-]
Yep. And when WasmGC is stable & widely adopted, apps built using Blazor will probably end up smaller than their equivalent Rust+wasm counterparts, since .NET apps won’t need to ship an allocator.
jmull 42 days ago [-]
I thought the problem was the hefty upfront price to pay for loading the runtime.
noworriesnate 42 days ago [-]
There's some truth to this, but there's a new way of rendering components on the server and pushing that HTML directly to the browser first. The components render but aren't fully interactive until the WASM comes in. It can make it feel snappy if it doesn't take too long to load the WASM.
csomar 42 days ago [-]
At the end of the day, all you are doing is syncing state with the server. In the future, you'll have a local state and a server state and the only server component is a sync Wasm binary hehe.
Still, you'll be coding your front-end with Wasm/Rust, so get in on the Rust train :)
meow_catrix 42 days ago [-]
Rust frontend dev is not going to become mainstream, no matter what.
bryanrasmussen 42 days ago [-]
metaphorically, Rust train does not sound enticing.
adrianN 42 days ago [-]
CGI is alive and well. It’s still the easiest way to build small applications for browsers.
chgs 42 days ago [-]
Nobody talks about it because people who use it just use it and get on with their life. It’s painfully easy to develop and host.
However it’s likely that generations who weren’t making websites in the days of Matt’s script archive don’t even know about cgi, and end up with massive complex frameworks which go out of style and usability for doing simple tasks.
I’ve got CGI scripts that are over 20 years old which run on modern servers and browsers just as they did during the dot-com boom.
consteval 42 days ago [-]
It truly depends on the application. If you have a LOB database-centered application, that's pretty much impossible to make "local first".
Figma and others work because they're mostly client-side applications. But I couldn't, for example, do that with a supply chain application. Or a business monitoring application. Or a ticketing system.
OtomotO 42 days ago [-]
I have a different take on this:
It depends on what you're actually building.
For the business applications I build, SSR (without any JS in the stack, just golang or Rust or Zig) is the future.
It saves resources which in turn saves money, is way more reliable (again: money), and is less complex (again: money) than syncing state all the time and having frontend state diverge from the actual (backend) state.
boomskats 42 days ago [-]
I have a different take on this:
Business applications don't care about client side resource utilisation. That resource has already been allocated and spent, and it's not like their users can decide to walk away because their app takes an extra 250ms to render.
Client-side compute is the real money saver. This means CSR/SPA/PWA/client-side state and things like WASM DuckDB and perspective over anything long-lived or computationally expensive on the backend.
jgord 42 days ago [-]
I definitely view the browser as an app delivery system... one of the benefits being you don't have to install and thus largely avoid dependency hell.
Recently I wrote an .e57 file uploader for quato.xyz - choose a local file, parse its binary headers and embedded xml, decide if it has embedded jpg panoramas in it, pull some out, to give a preview .. and later convert them and upload to 'the cloud'.
Why do that ? If you just want a panorama web tour, you only need 1GB of typically 50GB .. pointclouds are large, jpgs less so !
I was kind of surprised that was doable in browser, tbh.
We save annotations and 3D linework as json to a backend db .. but I am looking for an append-only json archive format on cloud storage which I think would be a simpler solution, especially as we have some people self hosting .. then the data will all be on their intranet or our big-name-cloud provider... they will just download and run the "app" in browser :]
silvestrov 42 days ago [-]
> Figma can [...] then I think vast majority of the apps out there can
This doesn't follow. If Figma has the best of the best developers, then most businesses might not be able to write apps that are just as complex.
C++ is a good example of a language that requires high programming skills to be usable at all. This is one of the reasons PHP became popular.
oscargrouch 42 days ago [-]
I worked on something in this space[1], using a heavily modified Chrome browser, years ago, but I think I was too early, and I bet something along these lines (probably simpler) will take off when the time is right.
Unfortunately I got a bit of a burnout from working on it for some years, but I confess I have a more optimized and more to-the-point version of this. Also, having to work on Chrome for this, with all its complexity, is a bit too much.
So even though it is a lot of work, nowadays I think it is better to start from scratch and implement the features slowly.
> I think local-first is the future. This is where the apps runs mostly within user's browser with little to no help from the server. Apps like Figma, Linear and Superhuman use this model very successfully.
The problem is: Figma and Linear are not local-first in the way local-first proponents explain local-first. Both of them require a centralized server, run by those companies, for synchronization. This is not what people mean when they talk about "local-first" being the future; they are talking about what Martin Kleppmann defined it as, which is that no specialized synchronization software is required.
jamil7 42 days ago [-]
I work on an iOS app like this right now; it predates a lot of these newer prebuilt solutions. There are some really nice aspects of building features this way: when it works well you can ignore networking code entirely. There are some tradeoffs though, and a big one has been debugging and monitoring, as well as migrations. There is also some level of end-user education, because the apps don't always work the way users expect. The industry the app serves is one in which people are working in the field, doing data entry on a tablet or phone with patchy connections.
createaccount99 42 days ago [-]
The frontend space is moving away from client-side state, not toward it.
bryanrasmussen 42 days ago [-]
the frontend space is always moving in every direction at the same time, this is known as Schrodinger's frontend, depending on when you look at it and what intentions you have - you may think you're looking at the backend.
nwienert 42 days ago [-]
I think you'll find the real long-term movement is to client-side, not away, and that's because it is both a faster and simpler model if done right.
curtisblaine 42 days ago [-]
Some applications are inherently hard to make local-first. Social media and Internet forums come to mind. Heavily collaborative applications maybe too.
swiftcoder 42 days ago [-]
I feel like social media is one of the main things folks want to be local-first. Own your own data, be able to browse/post while offline, and then it all syncs to the big caches in the sky on reconnect...
curtisblaine 42 days ago [-]
But how do you do that without essentially downloading the whole social network to your local machine? Are other people's comments, quotes, likes, moderation signals something that should stay on the server or should be synced to the client for offline use? In the first case, you can't really use the social network without connecting to a server. The second case is a privacy and resources nightmare (privacy, because you can hold posts and comments from users that have deleted their data or banned you, you can see who follows who etc. Resources, because you need to hold the whole social graph in your local client).
swiftcoder 42 days ago [-]
Usually folks looking for this sort of social network are also looking for a more intimate social experience, so we're not necessarily talking about sync'ing the whole Twitter feed firehose.
I don't think it's unreasonable from a resources perspective to sync the posts/actions of mutual followers, and from a privacy standpoint it's not really any worse than your friend screenshotting a text message from you.
curtisblaine 42 days ago [-]
Sure, but they're a tiny fraction of the mainstream users and you can already have that sort of experience with blogging and microblogging. Relevant social networks as the public knows them are hard to develop local-first. Even the humble forum where strangers meet to discuss is really hard to do that way. If it needs centralized moderation, or a relevance system via karma / votes, it's hard.
curtisblaine 42 days ago [-]
(unless you want another paradigm of social networking in which you don't have likes, public follows, replies etc., which won't probably fly because it has a much worse UX compared to established social networks)
lagrange77 42 days ago [-]
> WASM could succeed as well.
I would guess WASM is a big building block of the future of apps you imagine. Figma is a good example.
rpcope1 42 days ago [-]
So basically we're reinventing the JVM and its ecosystem?
thot_experiment 42 days ago [-]
Sort of yes, but WASM is designed with a different set of constraints in mind that make more sense when you just want to shove the runtime into your whatever. Sometimes reinventing X with lessons learned is actually a great idea.
flohofwoe 42 days ago [-]
In a way yes, except that WASM supports many more languages (e.g. back when I started to look into running C/C++ code in the browser - around 2010 or so - it was absolutely impossible to compile C/C++ to the JVM, which at the time would have been nice because Java Applets still were a thing - of course WASM didn't exist yet either, but Emscripten did, which eventually led to the creation of WASM via asm.js).
epistasis 42 days ago [-]
The JVM is great and all, but that doesn't mean that it is the be-all end-all of the genre. And having mucked with class loaders and writing directly in JVM assembly in the 2000s as part of programming language classes, I'm not sure that the JVM is even a very high point in the genre.
Sure, it allowed a large ecosystem, but holy crap is the whole JVM interface to the external world a clunky mess. For 20+ years I have groaned when encountering anything JVM related.
Comparing the packaging and ecosystem of Rust to that of Python, or shudder C++, shows that reinvention, with lessons learned in prior decades, can be a very very good thing.
singularity2001 42 days ago [-]
except that WASM has a huge classloader / linker problem: it's still very hard to combine two wasm files into one and get the memory merging right. Maybe the component model can fix it, but it comes with so much bloated nonsense that adoption in Safari might take forever.
iainmerrick 42 days ago [-]
It's a problem for some use cases, but is it really a "huge" problem in general?
You can't easily publish a library in WASM and link it into another application later. But you can publish it as C++ source (say) and compile it into a C++ application, and build the whole thing as WASM.
What are the scenarios where you really really want libraries in WASM format?
flohofwoe 42 days ago [-]
The only situation I can think of is a plugin system for native applications, where 'WASM DLLs' would solve a lot of issues compared to native DLLs.
But those WASM plugins would be self-contained and wouldn't need to dynamically load other WASM 'DLLs', so that situation is trivial even without the WASM Component Model thingie (which I also think is massively overengineered and kinda pointless - at least from my PoV, maybe other people have different requirements though).
nilslice 42 days ago [-]
this is exactly what we created Extism[0] and XTP[1] for!
XTP is the first (afaik) platform of its kind meant to enable an app to open up parts of its codebase for authorized outside developers to “push” wasm plugin code extensions directly into the app dynamically.
We created a full testing and simulation suite so the embedding app can ensure the wasm plugin code does what it’s supposed to do before the app loads it.
I believe this is an approach to integration/customization that exceeds the capabilities of Webhooks and HTTP APIs.
fwsgonzo 42 days ago [-]
I have to say that yes, it's a PITA. Ever tried to enable exceptions in one part, and disable them in the other? It simply won't load.
Or any other option. Really. So many investigations, so much time wasted.
bhelx 42 days ago [-]
I agree that it's a problem and I definitely agree with the concern about component model. But maybe Wasm doesn't need 1-1 replacement of all capabilities in the native world. At least not right now. As someone who mostly uses it for plug-in systems, this hasn't been a big issue for us.
mlhpdx 42 days ago [-]
Yes, and the .Net CLR, etc.
palmfacehn 42 days ago [-]
If your webserver is already JVM based, there's no context switch between the webserver and the application. Not sure how this would be solved with WASM.
SkiFire13 42 days ago [-]
This doesn't make sense, WASM is supposed to run on the client, which is generally a different machine than the webserver, while a context switch is an event that happens within a single machine.
mlnj 42 days ago [-]
WASM on the server also means an execution engine that containerizes and runs server code in any of many languages, without the overhead of an entire OS like we have with containers now.
palmfacehn 42 days ago [-]
From the article:
>Wasm on the Server
>Why on earth are we talking about Wasm? Isn't it for the browser?
>And I really hope even my mention of that question becomes dated, but I still hear this question quite often so it's worth talking about. Wasm was initially developed to run high performant code in the web browser.
pjmlp 42 days ago [-]
Yeah, by folks that most likely used to bash Application Servers from the early 2000s.
Not only the JVM, but also the CLR, BEAM, P-Code, M-Code, and every other bytecode format since UNCOL came to be in 1958 - but let's not forget about the coolness of selling WASM instead.
iforgotpassword 42 days ago [-]
That's a bit oversimplified. I had this thought too and tried to figure out why this is different, and I think there are some major points. The biggest one is the order in which they were built and designed. If we take Java and ask why applets didn't take off, since they could do everything WASM offers and more, two things come to mind: it was fucking slow on contemporary machines, and the GUI framework sucked. WASM is the complete opposite. The GUI framework is HTML/CSS, which despite its idiocy in many places has had a long time to mature, and we've generally come to accept the way it works. Now we just tacked a powerful VM onto it so we don't need to target slow Javascript. There isn't even a new language to learn, just compile whatever you want to WASM, which means you can use a familiar and mature dev environment.
The other point is that WASM is way more open than any of the mentioned predecessors were. They were mostly proprietary crap by vendors who didn't give a shit (flash: security, Microsoft: other platforms) so inevitably someone else would throw their weight around (Apple) to kill them, and with good reason. WASM is part of the browser, so as a vendor you're actually in control regarding security and other things, and are not at the mercy of some lazy entity who doesn't give a damn because they think their product is irreplaceable.
kaba0 42 days ago [-]
Wasm is more open, because we effectively have 1.5 browsers left, and whatever google decides will be the de facto “web standard” everyone should follow. If google were pushing for a slightly revamped jvm/applet model, that would be the standard (as the JVM is as open/standardized as it gets)
iforgotpassword 41 days ago [-]
I don't buy it. WASM was open from the start, and is incredibly simpler and thus easier to implement securely.
And no, for reasons stated before an applet model would never become the standard again. You'd rather have to integrate Java with the browser so it's entirely under your control, and considering how massive it is and how hard it was to properly sandbox it, nobody in their right mind would decide on this. WASM reuses a lot of infrastructure already there, it's simply the best solution from a technical standpoint.
pjmlp 42 days ago [-]
Ironically if it was today instead of 2010, Mozilla refusing to adopt PNaCL would hardly matter.
singularity2001 42 days ago [-]
Any reasonable interaction between WASM and JS/DOM gets postponed seemingly indefinitely though.
pjmlp 42 days ago [-]
Same premise of many other bytecode formats since 1958, a matter of implementation and marketing.
thot_experiment 42 days ago [-]
The coolness of WASM is that I can run WASM on like 99.999% of the targets I care to run code on with zero friction. Everyone (well it's HN so someone is probably on LYNX) reading this page is doing so in a browser with a WASM runtime. That has tremendous value.
anthk 42 days ago [-]
Not Lynx, as it doesn't render the comment layout correctly.
But Dillo works perfectly fine. No JS, no WASM, crazy fast on a n270 netbook.
I can barely run WASM programs that would have run fine on a Pentium 3 or 4.
pjmlp 42 days ago [-]
Applies to most bytecode formats, it is a matter of implementation.
marcosdumay 42 days ago [-]
It never applied to any web bytecode formats, and applies to very few local ones (arguably, none).
It's just a matter of having everybody agree to install the same interpreter, yes. That never happened before.
You're missing the forest for the trees. You already have the bytecode interpreter in front of you and so does everyone else. You are already running it, the difference between "it's definitely already running" and "you could trivially make this work if you put a bit of effort in" is enormous.
marcosdumay 42 days ago [-]
Never happened before.
And your list has no example of anything that was universally installed on everybody's system. The closest is IBM (if you mean x86 opcodes), but code for that one needed to be specialized by OS before it became ubiquitous, and got competitors before its main OS became ubiquitous, and then became ubiquitous again but with 2 main OSes, and then got competitors again.
SkiFire13 42 days ago [-]
All of those bytecode formats were designed to support higher abstractions. WASM on the other hand was born from asm.js, which tried to remove abstraction to make code run faster. Ultimately the goal for WASM was to run code faster, hopefully near native speed, which is not a priority for all the bytecodes you mentioned. If that wasn't needed then Javascript would have been enough.
pjmlp 42 days ago [-]
Revealing a lack of knowledge: some of those bytecode formats were designed for low-level languages like Pascal, Modula-2, C, C++, among others.
DanielHB 42 days ago [-]
For a long time I have been thinking we are heading for a world where WASM replaces the code running lambda functions in the cloud. WASM is traditionally seen as running on a host platform, but there is no reason it needs to be this way.
Because of the sandboxed nature of WASM, technically it could even run outside an operating system, or in ring 0, bypassing a lot of OS overhead.
Compiling to WASM makes a whole range of deployment problems a lot simpler for the user and gives a lot of room for the hosting environment to do optimizations (maybe even custom hardware to make WASM run faster).
superkuh 42 days ago [-]
Anything that requires executing arbitrary untrusted code from arbitrary untrusted sources automatically is bad and is definitely not filling the same role as server side CGI.
openrisk 42 days ago [-]
It is challenging to forecast how client-server architectures would evolve on the basis of technical merit, even if we restrict to "web architectures" (this itself being a bundle of multiple options).
Massive scaling with minimal resources is certainly one important enabler. If you were, e.g., to re-architect Wikipedia with the knowledge and hardware of today, how would you do it with wasm (on both desktop and mobile)? How about a massively multiplayer game, etc.?
On the other hand you have the constraints and costs of current commercial / business model realities and legacy patterns that create a high bar for any innovation to flourish. But high does not mean infinitely high.
I hate to be the person mentioning AI on every HN thread, but it's a good example of the long stagnation and then torrential change that is the hallmark of how online connected computing adoption evolves: e.g., we could have had numerically very intensive online apps and APIs a long time ago already (LLMs are not the only useful algorithm invented by humankind). But we didn't. It takes engineering a stampede to move the lazy (cash) cows to new grassland.
So it does feel that at some point starting with a fresh canvas might make sense (as in, substantially expand what is possible). When the cruft accumulates sometimes it collapses under its own weight.
slt2021 42 days ago [-]
putting everything in WASM really drains the battery on mobile.
I hate WASM-heavy websites, as they often have a bloat of javascript and the site is very slow, especially during scrolling and zooming, due to abuse of event listeners and piss-poor coding discipline.
I kinda miss sometimes server rendered index.php
thot_experiment 42 days ago [-]
WASM is a double edged sword, if you're compiling fast implementations of heavy lift functions to WASM and calling them in lieu of a JS impl you're going to end up saving battery life.
If you're generating bindings for some legacy disaster and shipping it to clients as a big WASM blob you're going to hell.
kennu 42 days ago [-]
In my view, the big promise of server-side WASM is to have an evergreen platform that doesn't need regular updates to the application. Just like HTML web pages work "forever" in browsers, WASM-based applications could work forever on the server-side.
Currently it is a huge PITA to have to update and redeploy your AWS Lambda apps whenever a Node.js or Python version is deprecated. Of course, usually the old code "just works" in the new runtime version, but I don't want to have to worry about it every few years. I think applications should work forever if you want them to, and WASM combined with serverless like Lambda will provide the right kind of platform for that.
akoboldfrying 42 days ago [-]
I don't know much about Wasm so this was helpful, thanks. It does seem like having the same language on both server and browser must make software delivery more flexible.
>Just in Time (JIT) compilation is not possible as dynamic Wasm code generation is not allowed for security reasons.
I don't follow -- is the Wasm runtime VM forbidden from JITing? (How could such a prohibition even be specified?) Assuming this is the case, I'm surprised that this is considered a security threat, given that TTBOMK JVMs have done this for decades, I think mostly without security issues? (Happy to be corrected, but I haven't heard of any.)
smolder 42 days ago [-]
I kind of like this variety of headline for its ability to stimulate discussion, but it's also nonsense. CGI can be any type of code responding to an individual web request, represented as a set of parameters. It has basically nothing to do with wasm, which is meant to be a universal code representation for a universal virtual machine. Have I missed something?
jblecanard 42 days ago [-]
Totally agree there; the article completely confuses the execution model with the tech used to execute it. Especially since it says "not CGI as the protocol but as the model".
As far as model goes, the serverless one is not a different model. It is still a flavor of the CGI concept. But the underlying tech is different. And not that much. It is only serverless for you as a customer. Technically speaking, it runs on servers in micro-VMs.
Those are orthogonal matters, and even if such tech as the middleware mentioned get some wind, the execution model is still the same and is not new.
waynecochran 42 days ago [-]
The use of wasm makes sense to me in context of the article.
smolder 42 days ago [-]
The article does not seem to support the title. You'll have to show me how it does. 'serverless' is a wholly different concept that doesn't have much to do with wasm. You could say it's CGI as a service, but that has nothing to do with wasm.
svieira 42 days ago [-]
It's quite buried amid a lot of extra paragraphs expositing about WASM and the future of serverless functions in general, but the article does contain this quote:
> One of the many effect of how [WASM] modules are isolated is that you can "pause" a module, and save its memory as a data segment. A similar concept to a Snapshot of a virtual machine. You can then start as many copies of the paused module as you like. (As I tell friends, it's like saving your game in an emulator.)
> The snapshotted module has no extra startup time ...
> If we go back to thinking about our Application Server models; this allows us to have a fresh process but without paying the startup costs of a new process. Essentially giving us CGI without the downsides of CGI. Or in more recent terms, serverless without cold starts. This is how Wasm is the new CGI.
smolder 42 days ago [-]
This is not like CGI. Calling it "the new CGI" seems to me like a way to confuse people, since CGI was about responding to individual requests, and carrying state across requests was always extra work. None of this has to do with WASM in particular.
Sorry, I'll use this rare opportunity to bring up WCGI for Caddy. :-)
It is a Caddy web server plugin that runs CGI applications compiled to Wasm, which includes scripting language runtimes.
The project isn't mine, and I haven't tried it for anything beyond a "Hello, world!".
I think it is a neat hack.
svieira 42 days ago [-]
With CGI the developer of the script could pretend that the-only-thing-which-existed was this request and do all kinds of things that would bring down a persistent process (leak memory, mutate globals, etc.). The problem was that spinning up a process per request was expensive and slow. Now, with WASM's memory model, it becomes possible to have a process that both does all the slow initialization work once and has the ease-of-reasoning properties of CGI's "a single process for a single request" serving model.
smolder 42 days ago [-]
Edit to say: thanks for your answer. I'll preserve the rest since I still think wheels are being reinvented here.
Bridging state across requests is not new. If "the new CGI" means more efficiently sharing state between requests, that's a really arbitrary qualifier and is not unique to WASM or serverless or anything like that. The article is myopic; it doesn't take into consideration what is established practice, done over and over.
Muromec 42 days ago [-]
You might have missed WASI.
wokwokwok 42 days ago [-]
What the article actually says:
> If we go back to thinking about our Application Server models; this allows us to have a fresh process but without paying the startup costs of a new process. Essentially giving us CGI without the downsides of CGI. Or in more recent terms, serverless without cold starts. This is how Wasm is the new CGI.
^ It's not a frivolous claim.
> Wasm improves performance, makes process level security much easier, and lowers the cost of building and executing serverless functions. It can run almost any language and with module linking and interface types it lowers the latency between functions incredibly.
^ Not unreasonable.
I don't agree that it's necessarily totally 'game changing', but if you read this article and you get to the end and you don't agree with:
> When you change the constraints in a system you enable things that were impossible before.
Then I'm left scratching my head about what it was you actually read, or what the heck you're talking about.
> Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
There's... just no possible future in which AWS and Azure just go away and stop selling something which is making them money, when a new technology comes along and makes it easier, safer and cheaper to do it.
> I kind of like this variety of headline for it's ability to stimulate discussion but it's also nonsense. CGI can be any type of code responding to an individual web request, represented as a set of parameters. It has basically nothing to do with wasm
*shakes head sadly...*
...well, time will tell, but for alllll the naysayers, WASM is here to stay and more and more people are using it for more and more things.
Good? Bad? Dunno. ...but it certainly isn't some pointless niche tech that no one cares about or that is about to disappear.
CGI enabled a lot of things. WASM does too. The comparison isn't totally outrageous. It'll be fun to see where it ends up. :)
anonu 42 days ago [-]
I like the thought. I also think about Python losing the GIL. If we can compile Python to WASM and maintain multi-threading, then the browser is sort of the new "Java JRE"... (to expand on the analogies)
throwaway313373 42 days ago [-]
> The Rack web server interface from the Ruby community eventually made into python via the Flask application server and the WSGI specification.
It's amazing how just one sentence can be so utterly wrong.
WSGI actually predates Rack by several years: the first WSGI spec was published in 2003 [0]; Rack was split from Rails in 2007 [1].
Flask is not an "application server", it is one of the web frameworks that implements the WSGI interface. Another popular framework that also implements it is Django. Flask is not the first WSGI implementation, so I'm not sure why the author decided to mention Flask specifically. It's probably one of the most popular WSGI implementations, but there is nothing special about it; it hasn't introduced any new concepts or a new paradigm or anything like that.
I'm not sure if the rest of the article is even worth reading if the author can't even get the basic facts right but for some reason feels the need to make up total nonsense in their place.
And Google probably wanted to ban applets etc. because they were negatively impacting search.
That doesn't mean there weren't good technical reasons, but that's not necessarily the driver.
For example, SSL is obviously good, but requiring SSL also raises the cost of making a new site above zero, greatly reducing search spam (a problem that costs billions otherwise).
ram_rattle 42 days ago [-]
I do not understand this, can you please explain
nicce 42 days ago [-]
Probably just a typical cat-and-mouse game. Some crawlers already support React-based websites, for example, so they can render the content and crawl based on that. I believe crawlers do not yet execute WASM code. But in time, they will.
EGreg 42 days ago [-]
WASM runs on the client side.
WASM is basically the new Microsoft Common Language Runtime, or the new JVM etc.
But OPEN!
pjmlp 42 days ago [-]
Plenty of choices for that, and Wikipedia doesn't list everything if one is willing to dive into computing history.
I disagree. In particular, for me the allure of CGI was its simplicity.
Have you played around with WASM in the browser? It involves way too many steps to get it integrated into the web page and to interact with it.
I let ChatGPT do the tedious work; have a look at a minimal example:
The part of loading and instantiating the WASM blob is 3 lines of JavaScript, and two of those are for the fetch() call. Calling into the WASM module is a regular JS function call. Not sure how this could be simplified much further; it is much simpler than dealing with FFI in other runtime environments (for instance calling into native code from Java or Kotlin on Android).
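For reference, it looks roughly like this (add.wasm and its exported add() are made-up placeholder names):

    // Fetch, compile and instantiate the module in one go.
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch('add.wasm'));
    // Calling into the module is just a JS function call.
    console.log(instance.exports.add(2, 3)); // 5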
Tepix 42 days ago [-]
The WASM code doesn't have access to the DOM; if you want to have a web app that interacts with the user (intriguing, isn't it?) you'll end up writing a lot of JavaScript glue code.
For better or worse, browser APIs have been designed to be used with JavaScript, so some FFI magic needs to happen when they are called from other languages, with or without WASM.
And if each web API automatically came with a C API specification (like WebGPU kinda does, for instance), Rust people would complain anyway that they need to talk to an 'archaic' C API instead of a 'modern' Rust API etc etc...
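To make the "glue code" point concrete, here is a rough sketch of the usual pattern (the file name, the env.set_text import, the #out element, and the exported memory/run are illustrative assumptions, not any particular toolchain's convention): the module can only call what the host hands it via the import object, and the host reads the string out of linear memory before touching the DOM.

    let memory; // will hold the module's exported linear memory
    const imports = {
      env: {
        // The one capability we grant the module: write a UTF-8 string to the page.
        set_text: (ptr, len) => {
          const bytes = new Uint8Array(memory.buffer, ptr, len);
          document.querySelector('#out').textContent = new TextDecoder().decode(bytes);
        },
      },
    };
    const { instance } = await WebAssembly.instantiateStreaming(fetch('app.wasm'), imports);
    memory = instance.exports.memory; // assumes the module exports its memory
    instance.exports.run();           // hypothetical entry point that ends up calling set_text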
TekMol 42 days ago [-]
I don't see WASM as a significant step forward. In fact, I question its purpose altogether.
Before WASM you could already compile code from other languages into JavaScript. And have the same benefits as you have with WASM.
The only benefit WASM brings is a bit faster execution time. Like twice the speed. Which most applications don't need. And which plain JavaScript offers about two years later because computers become faster.
And you pay dearly for being these two years ahead in terms of execution time. WASM is much more cumbersome to handle than plain JS when it comes to deployment, execution and debugging.
In IT we see it over and over again that saving developer time is more important than saving CPU cycles. So I think choosing WASM over plain JS is a net negative.
tsimionescu 42 days ago [-]
Debugging a Rust program compiled to JavaScript is MUCH harder than debugging one compiled to WASM. That is the whole point. And even making the program work when compiled to JS is iffy, as JS has a few breaking constraints, notably that it is single-threaded.
Sure, native JS is easier still. But there is a huge wealth of code already written in languages that are not JS. If you want a web app that needs this code, you'll develop it many times faster by compiling the pre-existing code to WASM than by manually rewriting it in JS, and the experience will be significantly better than compiling that code to JS.
thot_experiment 42 days ago [-]
ngl I've tried using Rust -> WASM and it's been an awful experience; I'm much, much happier with C. Rust generates enormous blobs because you have to include the stdlib, and if you don't, you don't get any of the benefits of using Rust. I'm probably overrotating on binary size, but it sure is nice being able to just read the WASM and make sense of it, which is generally the case for WASM made from C and is absolutely not the case if you're building from Rust.
therein 42 days ago [-]
Did you run the output through wasm-opt? The size isn't terribly bad. I have a whole complex GUI with realtime charts, based on egui, under 4MB uncompressed. This includes three fonts and even some images.
thot_experiment 42 days ago [-]
Yeah, no, obviously the size of the stdlib is fixed, so as your binary sizes grow it stops mattering.
I'm curious why you're taking the approach you describe; I think compiling entire GUI apps to WASM is the absolute worst thing, so clearly you have a different set of constraints on your work.
therein 42 days ago [-]
Yeah very much different constraints. I would send a screenshot if I knew I could make it public because the results look spectacular. Rendering at 60 to 120FPS, perfectly smooth navigation, rendering even 10k OHLC candles without a hiccup.
thot_experiment 42 days ago [-]
Don't worry I'm ok without having my eyes burned out by the lack of proper subpixel AA on your fonts. :P
10k candles at 120 fps seems like you could absolutely do it in JS alone, though I suppose the app came first and wanting to deploy it to end users via a webpage is an afterthought. Tbh writing performant JS for something like this isn't fun so despite my comments to the contrary you're probably making the right choice here.
therein 42 days ago [-]
> 10k candles at 120 fps seems like you could absolutely do it in JS alone
I think so too. I think everything we have is entirely possible to achieve in JavaScript, but you're spot on: writing performant JS like this isn't fun and is harder to maintain.
> Don't worry I'm ok without having my eyes burned out by the lack of proper subpixel AA on your fonts. :P
Fair fair. It is definitely happening, more noticeable in certain situations. :)
DanielHB 42 days ago [-]
> Before WASM you could already compile code from other languages into JavaScript. And have the same benefits as you have with WASM.
If you are referring to asm.js you must be joking. asm.js was basically a proof of concept and is worse in every way compared to WASM.
Like parsing time overhead alone makes it a non-option for most large applications.
You seem to imply you should just do it in plain JS instead for "deployment, execution and debugging" benefits. Imagine if you were free to use those Python ML libs from any language of your choice; that alone is enough of an argument. No one is going to reimplement them in JS (or any other environment) unless there is a huge ecosystem movement around it.
IshKebab 42 days ago [-]
The days of computers doubling in speed every 2 years are loooong gone.
Look into the history of WASM. They did try compiling everything into JS with asm.js, but then sensibly decided to do things properly. I don't know why anyone would object to proper engineering.
pjmlp 42 days ago [-]
Only because Mozilla refused to adopt PNaCl.
pulse7 42 days ago [-]
When computers become faster, WASM will still be twice the speed of JavaScript, because untyped languages limit the optimizations.
thot_experiment 42 days ago [-]
Bad take. Yes, you can probably optimize a lot of algos in JS such that they are pretty fast, but THAT is cumbersome. I'd much rather write the things I need to go fast in a language that's good at that (I use C for this). I'm currently working on a toolpath optimizer and I'm compiling just the optimizer function to WASM; it's a couple of kilobytes and will probably be an order of magnitude faster than the JS implementation while being FAR LESS cumbersome to write. My JS doesn't change at all because I can just call the "native function" from JS, replacing my original JS impl.
TekMol 42 days ago [-]
> probably be an order of magnitude faster than the JS implementation
What makes you think so?
thot_experiment 42 days ago [-]
Off the rip because I didn't spend time to make the JS implementation keep all of its data in a typed array that I manually manage, because it's tedious to do that in JS and it's straightforward in C. Though I'm betting there are other benefits I'll get from -O2 and static analysis.
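For anyone curious what that handoff looks like from the JS side, a rough sketch (alloc and optimize are hypothetical names for whatever the C module actually exports, and the allocator is assumed to return 8-byte-aligned pointers):

    const { instance } = await WebAssembly.instantiateStreaming(fetch('optimizer.wasm'));
    const { memory, alloc, optimize } = instance.exports;

    // Copy the input into the module's linear memory...
    const input = new Float64Array([1.0, 2.5, 3.25]); // toolpath data would go here
    const ptr = alloc(input.length * 8);
    new Float64Array(memory.buffer, ptr, input.length).set(input);

    // ...run the C code in place, then copy the result back out.
    optimize(ptr, input.length);
    const result = new Float64Array(memory.buffer, ptr, input.length).slice();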
TekMol 42 days ago [-]
Compiling your C to WASM might make it run twice as fast as compiling it to JS.
That's all. All other aspects of the workflow are the same.
thot_experiment 42 days ago [-]
I will try but I suspect the final score will be
1. WASM
2. JS handwritten for speed
3. C compiled to JS
and the gaps will be greater than 2x
flohofwoe 42 days ago [-]
You forgot 'C compiled to the asm.js subset of JavaScript', which would be in second place right after WASM (the switch from asm.js to WASM was hardly noticeable in my C/C++ code performance-wise - some browsers had special 'fast paths' for the asm.js subset though).
TekMol 42 days ago [-]
Awesome. I will notice when you reply here, no matter when. I routinely check for new replies even to old comments.
xnorswap 42 days ago [-]
JavaScript is incredibly well optimised; I'd be surprised if there's an order of magnitude difference between JS and WASM without a fundamental difference in the algorithm chosen.
thot_experiment 42 days ago [-]
I will likely spend time implementing my solver in several different styles because this is a project I'm tackling largely to make some points about how I think WASM should be used. I'm far from final benchmarks on this but my suspicion is that the gap will be large.
Yes, JavaScript is very well optimized, but as someone who's spent a lot of time writing JavaScript where speed matters: it's not easy, and it's not predictable. You're at the mercy of arcane optimizations in V8 which might not work for your specific situation because you did something weird, and if you're taking a lot of care not to do anything weird, and manually managing your memory with typed arrays, well, then you might as well write C and compile to WASM.
consteval 42 days ago [-]
When it comes to GC languages, they can often appear very fast for use cases that don't use a lot of memory.
If you use an algorithm that nearly exhausts memory, that's where you'll start seeing that "order of magnitude" difference between JS and something like C++. The same goes for Java and C#.
At low memory utilization, the GC can just put off collection, which saves execution time, so the runtime appears fast. But if you're close to the limit, then the GC has no choice but to pause often before continuing. Not very many algorithms will encounter this, but applications might, depending on what they do.
winternewt 42 days ago [-]
It's difficult or impossible to compile many languages into JavaScript. WASM is more general.
swiftcoder 42 days ago [-]
Do you have a source for this?
asm.js (the spiritual precursor to WASM) worked pretty much the same, and an awful lot of languages were compiled to it.
WASM does provide a more predictable compilation target to be sure, but I don't think it actually opens any new possibilities re what languages can be compiled.
winternewt 42 days ago [-]
Multithreading and 64-bit integers come to mind as creating difficulty, and I imagine "raw" memory buffer access having much higher latency, to the point where it's completely impractical. For example, a quick search gave me this library [1] that compiles FFmpeg into asm.js, but the author says it is almost a factor of 10 slower. asm.js would also become extremely verbose for any larger code base (imagine compiling an AAA PC game to asm.js).
It may be as you say that there are no new theoretical possibilities being opened by WASM, but to me it is a natural step forward to resolve inefficiencies and ergonomic problems in asm.js and make it all less painful. And hopefully WASM won't be frozen in time either - the platform needs to keep improving to make more use-case scenarios practical.
Theoretically or because of the tooling landscape?
flohofwoe 42 days ago [-]
> WASM is much more cumbersome to handle than plain JS when it comes to deployment, execution and debugging.
For some of us it's much easier than dealing with JavaScript though (for instance debugging C/C++ in Visual Studio is much nicer than debugging JS in Chrome - and that's possible by simply building for a native target and then just cross-compiling to WASM - but even the WASM debugging situation has improved dramatically with https://marketplace.visualstudio.com/items?itemName=ms-vscod...)
jamil7 42 days ago [-]
You’re assuming a lot of things in this comment; it seems like you believe every software engineer is working with the same constraints, language and platform as yourself.
TekMol 42 days ago [-]
No. I'm saying we could offer the same dev experience to non-JS coders by giving them compile-2-js tools instead of compile-2-wasm tools.
jamil7 42 days ago [-]
Not really, because then you need a JS environment everywhere you want to run your code. If I write a Rust module I have the possibility to compile to WASM or machine code. This is what I meant in my other comment: your assumption is that everyone is making browser apps in JavaScript that don't have any performance or resource constraints.
TekMol 42 days ago [-]
> possibility to compile to WASM or machine code
How is this better than "possibility to compile to JS or machine code"?
afiori 41 days ago [-]
There are significantly more (and more varied) wasm runtimes than js runtimes.
vbezhenar 42 days ago [-]
You can probably optimize JS to run as fast in most cases.
What WASM actually brings is predictable performance.
If you're a JS wizard, you can shuffle code around, using obscure tricks to make the current browser run it really fast. The problem is: JS wizards are rare, and tomorrow's browser might actually run the same code much slower if some particular optimization changed.
WASM performance is pretty obvious and won't change significantly across versions. And you don't need to be a wizard; you just need to know C and write good enough code, and plenty of people can do that. Clang will do the rest.
I agree that using WASM instead of JS without a reason is probably not very wise. But people will abuse everything, and sometimes it works out, so who knows... The whole modern web was born as an abuse of a simple language made to blink text.
EDIT: seems like yes[1], at least where AWS Lambda is concerned.
[1] https://filia-aleks.medium.com/aws-lambda-battle-2021-perfor...
It is interesting to ask why that is the case. From my point of view, the reason is that the JVM standard library is just too damn large, while WASM takes the lower-level approach of just not having one.
To give WASM the required capabilities, the host (the agent running the WASM code) needs to provide them. For a lot of languages that means using WASI, which moves most of the security concerns to the WASI implementation used.
But if you really want to create a secure environment you can just... not implement all of WASI. So a lambda function host environment can, for example, just not implement any filesystem WASI calls, because a lambda has no business doing filesystem stuff.
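A rough sketch of that idea from a JS host's perspective (path_open/fd_read/fd_write are real wasi_snapshot_preview1 import names, but the stub-everything policy, the errno value and wasmBytes are illustrative assumptions):

    // Hand the module a deliberately crippled WASI: filesystem calls just fail.
    const deny = () => 52; // non-zero WASI errno; the exact code here is illustrative
    const imports = {
      wasi_snapshot_preview1: {
        path_open: deny,
        fd_read: deny,
        fd_write: deny, // a real host would probably still wire stdout up to its logs
        proc_exit: (code) => { throw new Error('exit(' + code + ')'); },
      },
    };
    const { instance } = await WebAssembly.instantiate(wasmBytes, imports);

Any WASI import that isn't provided at all simply fails at instantiation time, which is an even stricter form of denial.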
> An alternative is to use virtualization. So you can either compile your code to WASM blob and run it in the big WASM server, or you can compile your code to amd64 binary, put it along stripped Linux kernel and run this thing in the VM.
I think the first approach gives a lot more room for the host to create optimizations, to the point we could see hardware with custom instructions to make WASM faster. Or custom WASM runtimes heavily tied to the hardware they run on to make better JIT code.
I imagine a future where WASM is treated like LLVM IR
When you throw WASM into the browser, its access to the outside world is granted by the JavaScript container that invokes it.
That's very different compared to how the old browser plugins operated. Plugins like the JVM or Flash were literally the browser calling into a binary blob with full access to the whole platform.
That is why the WASM model is secure vs the JVM model. WASM simply can't interact with the system unless it is explicitly given access to the system by the host calling it. It is even more strictly sandboxed than the JavaScript engine which is executing it.
Heh, there were literally CPUs with some support for the JVM! But it turns out that “translating” between different forms is not that expensive (and can be done ahead of time and cached), given that CPUs already use a higher level abstraction of x86/arm to “communicate with us”, while they do something else in the form of microcode. So it didn’t really pay off, and I would wager it wouldn’t pay off with WASM either.
Jazelle, a dark history that ARM never wants to mention again
Aren't its VM implementations routinely exploited? Ranging from "mere" security feature exploits, such as popunders, all the way to full on proper VM escapes?
Like even today, JS is run interpreted on a number of platforms, because JIT compiling is not trustworthy enough. And I'm pretty sure the interpreters are not immune either.
The browser attack surface attracts the most intense security researcher scrutiny, so they do find really wild chains of like 5 exploits that could possibly be used as a zero day, but that mostly reflects just how much scrutiny and hardening it gets. Realistically anything else will be more exploitable than that, e.g. your Chromecast playing arbitrary video streams must be more exploitable than JS on a fully patched Chrome.
It usually requires multiple 0-days to overcome all those defenses and do anything useful. (And it is also the highest glory at DEF CON.)
The browser is surely frequently attacked due to the high rewards. But it also gets patched really fast. (As long as you are not using a browser from 10 years ago.)
The primary one is its idea of a "capability model", where it basically can't do any kind of risky action (i.e. touch the outside world via the network or the file system, for example) unless you give it explicit permission to do so.
Beyond that it has things like memory isolation, etc., so even an exploit in one module can't impact another, and each module has its own operating environment and permission scope associated with it.
This is true, but unfortunately in the negative sense: both are as insecure as each other, i.e. pwned. [1]
[1] https://discuss.grapheneos.org/d/14344-cellebrite-premium-ju...
A ChromeOS user isn't apt-get installing binaries or copy/pasting bash one-liners from GitHub. If you enable the Linux dev environment, that also runs in an isolated VM with a much more limited attack surface vs, say, an out-of-the-box Ubuntu install. Both the Android VM and Linux VM can be, and routinely are, blocked by MDM in school or work contexts.
You could lock down a Linux install with SELinux policies and various other restrictions, but on ChromeOS it's the default mode that 99% of users are protected by (or limited by, depending on your perspective).
To give you a sense of where they were half a decade ago, you can already see in this video that it's, as I described, miles in front of anything that exists even today: https://youtu.be/pRlh8LX4kQI
And when we get to them going for a total ground-up, first-principles approach with Fuchsia as a next-generation operating system, that is something else entirely, on a different level again.
I genuinely didn’t have a hint of irony in my original comment. They are actually that much better when it comes to security.
They also took much longer to develop than whatever you could cook up in plain HTML and JavaScript.
The JVM is not fundamentally insecure, in the same way that no Turing-complete abstraction is, like an x86 emulator or so. It's always the attached APIs that open up new attack surfaces. Since the JVM at the time was used to bring absolutely unimaginable features to the otherwise anemic web, it had to be unsafe to be useful.
Since then, the web has improved a huge amount - a complete online FPS game could literally be programmed in just JS almost a decade ago. If a new VM can just interact with this newfound JS ecosystem and rely on those as its boundaries, it can of course be made much safer. But that's not inherent to this other VM.
This is an oversimplification — there's nothing about the JVM bytecode architecture making it insecure. In fact, it is quite a bit simpler as an architecture than WASM.
Applets were just too early (you have to remember what the state of tech looked like back then), and the implementation was of poor quality to boot (owing in part to some technical limitations — but not only).
But worst of all, it just felt jank. It wasn't really part of the page, just a little box in it, that had no connection to HTML, the address bar & page history, or really anything else.
The JavaScript model rightfully proved superior, but there was no way Sun could have achieved it short of building their own browser with native JVM integration.
Today that looks easy, just fork Chromium. But back then the landscape was Internet Explorer 6 vs the very marginal Mozilla (and later Mozilla Firefox) and proprietary Opera that occasionally proved incompatible with major websites.
For example, Java was the first mainstream language with built-in threading and that resulted in a pile of concurrency bugs. Porting Java to a new platform was not easy because it often required fixing threading bugs in the OS. By contrast, JavaScript and WASM (in the first version) are single-threaded. For JavaScript it was because it was written in a week, but for WASM, they knew from experience to put off threading to keep things simple.
Java also has a class loader, a security manager that few people understand and sensitive native methods that relied on stack-walking to make sure they weren’t called in the wrong place. The API at the security boundary was not well-designed.
A lot of this is from being first at a lot of things and being wildly ambitious without sufficient review, and then having questionable decisions locked in by backward compatibility concerns.
Your timeline is off by about five years. Java support shipped with Netscape Navigator 2 in 1995, and 95/96/97 is when Java hype and applet experimentation peaked.
Netscape dominated this era. IE6 wouldn’t come out until 2001 and IE share generally wouldn’t cross 50% until 2000 https://en.m.wikipedia.org/wiki/File:Internet-explorer-usage...
By the time Mozilla spun up with open sourced Netscape code, Java in the browser was very much dead.
You nailed the other stuff though.
(Kind of an academic point but I’m curious if Java browser/page integration was much worse than JavaScript in those days. Back then JS wasn’t very capable itself and Netscape was clearly willing to work to promote Java, to the point of mutilating and renaming the language that became JavaScript. I’m not sure back then there was even the term or concept of DOM, and certainly no AJAX. It may be a case of JavaScript just evolving a lot more because applets were so jank as to be DOA)
The practical reasons have more to do with how the JVM was embedded in browsers than the actual technology itself (though Flash was worse in this regard). They were linked at the binary level and had the same privileges as the containing process. With the JS VM, the browser has a lot more control over I/O, since the integration evolved this way from the start.
I'm sure there's a big long list of WebKit exploits somewhere that will contradict that sentence...
[1] https://bughunters.google.com/about/rules/chrome-friends/574...
Unlike the JVM, WASM offers linear memory, and no GC by default, which makes it a much better compilation target for a broader range of languages (most common being C and C++ through Emscripten, and Rust).
> Maybe I’m just old, but I thought we’d learnt our lesson on running untrusted third party compiled code in a web browser.
WASM is bytecode, and I think most implementations share a lot of their runtime with the host JavaScript engine.
> In all of these cases it’s pitched as improving the customer experience but also conveniently pushes the computational cost from server to client.
The whole industry has swung from fat clients to thin clients and back since time immemorial. The pendulum will keep swinging after this too.
Indeed, graphics pioneer and all-around-genius Ivan Sutherland observed (and named) this back in 1968:
"wheel of reincarnation "[coined in a paper by T.H. Myer and I.E. Sutherland On the Design of Display Processors, Comm. ACM, Vol. 11, no. 6, June 1968)] Term used to refer to a well-known effect whereby function in a computing system family is migrated out to special-purpose peripheral hardware for speed, then the peripheral evolves toward more computing power as it does its job, then somebody notices that it is inefficient to support two asymmetrical processors in the architecture and folds the function back into the main CPU, at which point the cycle begins again.
"Several iterations of this cycle have been observed in graphics-processor design, and at least one or two in communications and floating-point processors. Also known as the Wheel of Life, the Wheel of Samsara, and other variations of the basic Hindu/Buddhist theological idea. See also blitter."
https://www.catb.org/jargon/html/W/wheel-of-reincarnation.ht...
More like fads sold to milk even more money from people.
For the purposes of OP's question, the memory model difference is one of the key reasons why you might want to use wasm instead of a java applet.
Opt-in or not, it is there on the runtime.
- Wasm has a verification specification that Wasm bytecode must comply with. This verified subset makes the security exploits seen in those older technologies outright impossible. Attacks based on misbehaving hardware, like Rowhammer, might still be possible, but you, e.g., can't reference memory outside of your Wasm instance's memory by tricking the VM into interpreting a number you have as a pointer to memory that doesn't belong to you.
- Wasm bytecode is as trivial as it gets to turn into machine code, so implementations can be smaller and faster than using a VM.
- Wasm isn't owned by a specific company, and has an open and well written specification anyone can use.
- It has been adopted as a web standard, so no browser extensions are required.
As for computation on clients versus servers, that's already true for JavaScript. More true in fact, since Wasm code can be efficient in ways that are impossible for JavaScript.
As far as I understand, in WASM memory is a linear blob, so if I compile C++ to WASM, isn't it possible to reference a random segment of memory (say, via an unchecked array index exploit) and then do whatever you want with it (exploit other bugs in the original C++ app)? The only benefit is that access to the OS is isolated, but all the other exploits are still possible (and impossible in the JVM/.NET).
Am I missing something?
Also see: https://www.usenix.org/conference/usenixsecurity20/presentat...
„We find that many classic vulnerabilities which, due to common mitigations, are no longer exploitable in native binaries, are completely exposed in WebAssembly. Moreover, WebAssembly enables unique attacks, such as overwriting supposedly constant data or manipulating the heap using a stack overflow.”
My understanding is that people talking about wasm being more secure mostly talk about the ability to escape the sandbox or access unintended APIs, not integrity of the app itself.
We're mostly concerned with being able to visit a malicious site, and execute wasm from that site without that wasm being able to execute arbitrary code on the host - breaking out of the sandbox in order to execute malware. You say the only benefit is that access to the OS is isolated, but that's the big benefit.
Having said that, WebAssembly has some design decisions that make your exploits significantly more difficult in practice. The call stack is a separate stack from WebAssembly memory that's effectively invisible to the running WebAssembly program, so return oriented programming exploits should be impossible. Also WebAssembly executable bytecode is separate from WebAssembly memory, making it impossible to inject bytecode via a buffer overflow + execute it.
If you want to generate WebAssembly code at runtime, link it in as a new function, and execute it, you need participation from the host, e.g. https://wingolog.org/archives/2022/08/18/just-in-time-code-g...
WASM, in browsers, runs entirely inside a secure environment with no access to the system.
Further: WASM and JS are in their own process with no OS access; they can't access the OS except by RPC to the browser. Flash/Java, though, ran all user code in the same process, with full access to the OS.
Chrome started with that, but also started without GPU-based graphics and spent 2-3 years adding yet another process to make it possible. Mozilla and Safari took almost 10 years to catch up.
Both Java and .NET verify their bytecode.
>Wasm bytecode is trivial (as it gets) to turn into machine code
JVM and .NET bytecodes aren't supercomplicated either.
Probably the only real differences are: 1) WASM was designed to be more modular and slimmer from the start, while Java and .NET were designed to be fat; currently there are modularization efforts, but it's too late. 2) WASM is an open standard from the start, and so browser vendors implement it without plugins.
Other than that, it feels like WASM is a reinvention of what already existed before.
The difference is in the surface area of the standard library -- Java applets exposed a lot of stuff that turned out to have a lot of security holes, and it was basically impossible to guarantee there weren't further holes. In WASM, the linear memory and very simple OS interface make the sandboxing much more tractable.
What I’m thinking of is simply memory corruption issues from the linear memory model, and while these can only corrupt the given process, not anything outside, it is still not something the JVM allows.
Wasm GC also introduces non-null reference types, and the validation algorithm guarantees that locals of declared non-null type cannot be used before being initialized. That's also done as part of the single-pass verification.
Wasm GC has a lower-level object model and type system than the JVM (basically structs, arrays, and first-class functions, to which object models are lowered), so it's possible that a higher-level type system, when lowered to Wasm GC, may not be enforceable at the bytecode level. So you could, e.g. screw up the virtual dispatch sequence of a Java method call and end up with a Wasm runtime type error.
> Two’s complement signed integers in 32 bits and optionally 64 bits.
https://webassembly.org/docs/portability/#assumptions-for-ef...
And nothing suggesting unsigned ints here:
https://webassembly.org/features/
With the two's complement convention, the concept of 'signedness' only matters when a narrow integer value needs to be extended to a wider value (e.g. 8-bit to 16-bit), specifically whether the new bits needs to be replicated from the narrow value's topmost bit (for signed extension) or set to zero (for unsigned extension).
It would be interesting to speculate what a high level language would look like with such sign-agnostic "Schroedinger's integer types").
https://en.wikibooks.org/wiki/X86_Assembly/Shift_and_Rotate
https://webassembly.github.io/spec/core/appendix/index-instr...
See how there's only i32.load and i32.eq, but there's i32.lt_u and i32.lt_s. Loading bits from memory or comparing them for equality is the same operation, bit for bit, whether signed or unsigned. However, less-than requires knowing the desired signedness, and is split between signed and unsigned variants.
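A quick way to see why the split exists, in plain JS, just to illustrate the bit-level point:

    const bits = 0xFFFFFFFF;        // one and the same 32-bit pattern...
    const asSigned   = bits | 0;    // ...read as two's complement signed: -1
    const asUnsigned = bits >>> 0;  // ...read as unsigned: 4294967295

    // Equality on the raw bits is sign-agnostic (i32.eq), but "less than 1"
    // depends on the chosen interpretation (i32.lt_s vs i32.lt_u):
    console.log(asSigned < 1);   // true  (-1 < 1)
    console.log(asUnsigned < 1); // false (4294967295 < 1)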
WASM makes that safe, and that's the whole point. It doesn't increase the attack surface by much compared to running JavaScript code in the browser, while the alternative solutions were directly poking through into the operating system and bypassing any security infrastructure of the browser for running untrusted code.
Java was an outsider trying to get in.
The difference is not in the nature of things, but rather who championed it.
And otherwise, WASM is different in two ways.
For one, browsers have gotten pretty good at running untrusted 3rd party code safely, which Flash or the JVM or IE or .NET were never even slightly adequate for.
The other difference is that WASM is designed to allow you to take a program in any language and run it in the user's browser. The techs you mention were all available for a single language, so if you already had a program in, say, Python, you'd have to re-write it in Java or C#, or maybe Scala or F#, to run it as an applet or Silverlight program.
From 2001,
"More than 20 programming tools vendors offer some 26 programming languages — including C++, Perl, Python, Java, COBOL, RPG and Haskell — on .NET."
https://news.microsoft.com/2001/10/22/massive-industry-and-d...
This WebAssembly marketing is incredible.
There never was a wasm vs applet debate.
> Nobody banned Flash.
What happened first? Chrome dropping support for Flash, or Flash stopping updates?
- The security model (touched on by other comments in this thread)
- The Component Model. This is probably the hardest part to wrap your head around, but it's pretty huge. It's based on a generalization of "libraries" (which export things to be consumed) to "worlds" (which can both export and import things from a "host"). Component modules are like a rich wrapper around the simpler core modules. Having this 2-layer architecture allows far more compilers to target WebAssembly (because core modules are more general than JVM classes), while also allowing modules compiled from different ecosystems to interoperate in sophisticated ways. It's deceptively powerful yet also sounds deceptively unimpressive at the same time.
- It's a W3C standard with a lot of browser buy-in.
- Some people really like the text format, because they think it makes Wasm modules "readable". I'm not sold on that part.
- Performance and the ISA design are much more advanced than JVM.
It's just an IDL; IDLs have been around a long time and have been used for COM, Java, .NET, etc.
As well as the security model differences others are debating, and WASM being an open standard that is easy to implement and under no control of a commercial entity, there is a significant difference in scope.
WebAssembly is just the runtime that executes byte-code-compiled code efficiently. That's it. No large standard runtime (compile in everything you need), no UI manipulation (message passing to JS is how you affect the DOM, and how you read DOM status back), etc. It does one thing (crunch numbers, essentially) and does it well.
The issue with those older technologies was that the runtime itself was a third-party external plugin you had to trust, and they often had various security issues. WASM, however, is an open standard, so browser manufacturers can directly implement it in browser engines without trusting other third parties. It is also much more restricted in scope (fewer abstractions mean less work to optimize them!), which helps reduce the attack surface.
That is nonsense. WASM and JS have the exact same performance boundaries in a browser because the same VM runs them. However, WASM allows you to use languages where it's easier to stay on a "fast-path".
WASM on its own isn't anything special security-wise. You could modify Java to be as secure or actually more secure just by stripping out features, as the JVM blocks some kinds of 'internal' security attacks that WASM only has mitigations for. There have been many sandbox escapes for WASM and there will be more, for example this very trivial sandbox escape in Chrome:
https://microsoftedge.github.io/edgevr/posts/Escaping-the-sa...
... is somewhat reminiscent of sandbox escapes that were seen in Java and Flash.
But! There are some differences:
1. WASM / JS are minimalist and features get added slowly, only after the browser makers have put a lot of effort into sandboxing. The old assumption that operating system code was secure is mostly no longer held, whereas in the Flash/applets/pre-Chrome era it was. Stuff like the Speech XML exploit is fairly rare, whereas for the other attempts they added a lot of features very fast and so there was more surface area for attacks.
2. There is the outer kernel sandbox if the inner sandbox fails. Java/Flash didn't have this option because Windows 9x didn't support kernel sandboxing; even Win2K/XP barely supported it.
3. WASM / JS doesn't assume any kind of code signing, it's pure sandbox all the way.
Also no corporate overlord control.
Obfuscation and transpilation are not new in JS land.
Google App Engine (2008) predates Lambda (2014) by 6 years!
I was never quite sure why we got the name "serverless", or where it came from, since there were many such products a few years before, and they already had a name.
App Engine had both batch workers and web workers, and Heroku did too.
They were both pre-Docker, and maybe that makes people think they were different? But I think Lambda didn't launch with Docker either.
Serverless refers to the software not being a server (usually implied to be an HTTP server), which was the common way to expose a network application throughout the 2010s; instead, some other process-based means is used to have the application interface with an outside server implementation. Hence server-less.
It's not a new idea, of course. Good old CGI is serverless, but CGI defines a specific protocol whereas serverless refers to a broad category of various implementations.
So, you could run a CGI script on a serverless platform, or a "serverful" one. You could even run it locally.
https://en.wikipedia.org/wiki/Serverless_computing
Per wikipedia: "Serverless is a misnomer in the sense that servers are still used by cloud service providers to execute code for developers. However, developers of serverless applications are not concerned with capacity planning, configuration, management, maintenance, fault tolerance, or scaling of containers, virtual machines, or physical servers."
You don't care about the specific machine, the OS kernel, the distro, the web server, or SSL certificates when you're doing "serverless"
And the SAME was true of "PaaS"
This whole subthread just proves that the cloud is a mess -- nobody knows what "serverless" is or that App Engine / Heroku already had it in 2008 :)
No, "server" most definitely refers to software that listens for network requests. Colloquially, hardware that runs such software is often also given the server moniker ("the computer running the server" is a mouthful), but that has no applicability within the realm of discussion here. If you put the user in front of that same computer with a keyboard and mouse controlling a GUI application, it would no longer be considered a server. We'd call it something like a desktop. It is the software that drives the terminology.
> nobody knows what "serverless" is or that App Engine / Heroku already had it in 2008 :)
Hell, we were doing serverless in the 90s. You uploaded your CGI script to the provider and everything else was their problem.
The difference back then was that everyone used CGI, and FastCGI later on, so we simply called it CGI. If you are old enough to recall, you'll remember many providers popped up advertising "CGI hosting". Nowadays it is a mishmash of proprietary technologies, so while technically no different than what we were doing with CGI back in the day, it isn't always built on literal CGI. Hence serverless was introduced as a broader term to capture the gamut of similar technologies.
https://fly.io/blog/the-serverless-server/
Pretty sure Lambda has an option for that too -- you are responsible for the HTTP server, which is proxied, yet it is still called serverless
---
On the second point, I wrote a blog post about that - https://www.oilshell.org/blog/2024/06/cgi.html
It would make for a much more interesting conversation if you cite some definitions/sources, as others have done here, rather than merely insisting that everyone thinks of the terms as you think of them
Right, with the quotes being theirs. Meaning even they recognize that it isn't serverless-proper, just a blatant attempt at gaining SEO attention in an effort to advertise their service. It is quite telling when an advertisement that explicitly states right in it that it has nothing to do with serverless is the best you could come up with.
I also can't come up with one that's significantly better.
But pedantically, serverless is to be taken literally. It implies that there is no server in your application.
The move away from CGI/FastCGI/SCGI to the application being the server was a meaningful shift in how web applications were developed. Now that we've started moving the server back out of the application in favour of the process-based model again, albeit now largely through proprietary protocols instead of a standard like CGI, serverless has come into use in recognition of that. We don't want to go back to calling it CGI because CGI is no longer the protocol du jour.
Heroku is a few seconds:
> It only takes a few seconds to start a one-off dyno process or to scale up a web or worker process.
Lambda created Firecracker to be snappier:
> The duration of a cold start varies from under 100 ms to over 1 second.
I think App Engine is in the same ballpark as Lambda (and predated it). Fly.io uses Firecracker too:
> While Fly Machine cold starts are extremely fast, it still takes a few hundred milliseconds, so it’s still worth weighing the impact it has on performance.
but WASM is yet an order of magnitude faster and cheaper:
> Cloudflare Workers has eliminated cold starts entirely, meaning they need zero spin up time. This is the case in every location in Cloudflare's global network.
WASM is currently limited in what it can do, but if all you're doing is manipulating and serving HTML, it's fantastic at that.
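To make that concrete, a minimal edge handler in the Cloudflare Workers module syntax looks roughly like this; whether the body is produced by plain JS or by calling into a WASM module, the request/response shape is the same (the HTML here is obviously just a placeholder):

    export default {
      // Runs at the edge on each request; no process to keep warm, no server to manage.
      async fetch(request) {
        const url = new URL(request.url);
        return new Response('<h1>Hello from ' + url.pathname + '</h1>', {
          headers: { 'content-type': 'text/html; charset=utf-8' },
        });
      },
    };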
It was the heyday of SPAs, light backends, and thick frontends.
“Serverless” is a great way to say “you don’t need to be a backend dev or even know anything about backend to deploy with us”
And it worked really really well.
Then people realized that they should know a thing or two about backend.
I always really hated that term.
App Engine is PaaS: You provide your app to the service in a runnable form (maybe a container image, maybe not) and they spin up a dedicated server (or slice of a server) to run it continuously.
Lambda is Serverless: You provide them a bit of code and a condition under which that code should run. They charge you only when that thing happens and the code runs. How they make that happen (deploy it to a bajillion servers? Only deploy it when it’s called?) are implementation details that are abstracted from the user/developer as long as Lambda makes sure that the code runs whenever the condition happens.
So with PaaS you have to pay even if you have 0 users, and when you scale up you have to do so by spinning up more “servers” (which may result in servers not being fully utilized). With Serverless you pay for the exact amount of compute you need, and 0 if your app is idle.
That's how App Engine worked in 2008, and it looks like it still works that way:
https://cloud.google.com/appengine/pricing
Apps running in the flexible environment are deployed to virtual machine types that you specify. These virtual machine resources are billed on a per-second basis with a 1 minute minimum usage cost.
This applied to both the web workers and the batch workers
It was "serverless" in 2008!
> spin up a dedicated server (or slice of a server) to run it continuously.
Absolutely NOT true of App Engine in 2008, and I'm pretty sure Heroku in 2008 too!
The fact that lambda would automatically scale to meet whatever QPS you got sounds terrifying.
Backend returns 4xx/5xx? The server is down. Particular data is not available in this instance and app handles this error path poorly? The server is down. There is no API to call for this, how do I implement "the server"?
Some people still hold the worldview that application deployment is similar to mod_php, where source files are yoloed to the live filesystem. In this worldview, ignorant of the complexities of operations, serverless is a perfectly fitting marketing term, much like Autopilot, first chosen by Musk, chef's kiss.
It is a perfectly logical name if you know what you are talking about and are familiar with the history of how these so-called serverless applications used to be developed.
Which is to say that back in the day, once CGI fell out of fashion, the applications became servers themselves. You would have a listening HTTP server right within the application, often reverse proxied through something like Apache or nginx, and that is how it would be exposed to the world. The downside of this model is that your application always needs to be resident in order to serve requests, and, from a scaling perspective, you need to predict ahead of time how many server instances are needed to handle the request load. This often resulted in poor resource utilization.
Now, with a return to the CGI-esque model, where you have managing servers call upon the application through a process-based execution flow, albeit no longer using CGI specifically, the application is no longer the server again. This allows systems to save on resources by killing off all instances of your application when no requests are happening, and, with respect to scalability, it gives the system the freedom to launch as many instances of your application as are required to handle the load when the requests start coming in.
Hence, with the end of the application being the server under the adoption of said process-based model, the application became serverless.
> I was dumbfounded by the term
The marketers have certainly tried to usurp the term for other purposes. It seems just about everything is trying to be called "serverless" nowadays. Perhaps that is the source of your dumbfoundary? Then again, if you know what you are talking about then you know when marketers are blowing smoke, so...
This sounds... not right. Honestly, this is an essential feature for allowing workloads like hot-reloading code cleanly.
I'm quite convinced the alleged security argument is bull. You can hot reload JS (or even do wilder things like codegen) at runtime without compromising security. Additionally, you can emulate codegen or hot reload by dynamically reloading the entire Wasm runtime and preserving the memory, but the user experience will be clunky.
I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
I kind of dislike WASM. It's a project lacking strong direction and the will to succeed in a timely manner. First, the whole idea is conceptually unclear: its name suggests that it's supposed to be 'assembly for the web', a machine language for a virtual CPU, but it's actually an intermediate representation meant for compiler backends, with high-level features planned such as GC support. It's still missing basic features, like the aforementioned hot reload, non-hacky threading, native interfacing with the DOM (without JavaScript, ideally), low-overhead graphics/compute API support, low-level audio access, etc. You can't run a big multimedia app in it without major compromises.
> I'm quite convinced the alleged security argument is bull. You can hot reload JS (or even do wilder things like codegen) at runtime without compromising security.
JIT here is referring to compiling native code at runtime and executing it. This would be a huge security compromise in the browser or in a wasm sandbox.
> I don't see any technical reason why this couldn't be possible. If this were a security measure, it could be trivially bypassed.
It can't be trivially bypassed, because it's baked into the design and instruction set. You can read some more about how it works here: https://webassembly.org/docs/security/
> Also, WASM bytecode is very similar conceptually to .NET IL, Java bytecode etc., things designed for JIT compilation.
Yes, and like with Wasm, the engine is responsible for JITting. But giving the user the power to escape the runtime and emit native code and jump to it is dangerous.
Browsers definitely use a form of JIT-ing for WASM (which is a bit unfortunate, because just as with JITs, you might see slight 'warmup stutter' when running WASM code for the first time - although this has gotten a lot better over the years).
...also I'm pretty sure you can dynamically create a WASM blob in the browser and then dynamically instantiate and run that - not sure if that's possible in other WASM runtimes though, and even in the browser you'll have to reach out to Javascript, but that's needed for accessing any sort of 'web API'.
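For the curious, a minimal sketch of that from the JS side (the 8 bytes below are just the header of an empty module; a real use case would emit a complete module, e.g. from an in-browser compiler, into the same kind of buffer):

```typescript
// Build a WASM blob at runtime and instantiate it. These bytes form a valid
// but empty module: the "\0asm" magic number plus binary format version 1.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

async function runDynamicModule(): Promise<void> {
  // Compile + instantiate the freshly built blob; any imports the module
  // declared would go in the second argument.
  const { instance } = await WebAssembly.instantiate(bytes, {});
  console.log("exports of the dynamically created module:", instance.exports);
}

runDynamicModule();
```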
I (and the article) wasn't referring to this kind of JIT. I was referring to the ability to dynamically create or modify methods or load libraries while the app is running (like `DynamicMethod` in .NET).
Afaik WASM even in the browser does not allow modifying the blob after instantiation.
The thing you are referring to puzzles me as well. I initially thought that WASM would be analogous to x86 or ARM asm and would be just another architecture emitted by the compiler. Running it in the browser would just involve a quick translation pass to the native architecture (with usually 1-to-1 mapping to machine instructions) and some quick check to see that it doesn't do anything naughty. Instead it's an LLVM IR analog that needs to be fed into a full-fledged compiler backend.
I'm sure there are good technical reasons as to why it was designed like this, but as you mentioned, it comes with tangible costs like startup time and runtime complexity.
...not your own WASM blob, but you can build a new WASM blob and run that.
> The thing you are referring to puzzles me as well...
Yes, compilers emit WASM, but that WASM is just a bytecode (similar to JVM or .NET bytecode but even higher level because WASM enforces 'structured control flow') and needs to be compiled to actual machine code on the client before it can run, and this isn't a simple AOT compilation - in browsers at least (it used to be for a while in Firefox, but that caused issues for large projects like Unity games, which might take dozens of seconds to AOT compile).
AFAIK all browsers now use a tiered approach. The WASM-to-machine-code compilation doesn't happen on the whole WASM blob at once, but function by function. The first time a WASM function is called, a fast compilation happens which may have slow runtime performance; from then on, 'hot functions' will be compiled with a higher-tier backend which does additional optimization, is slow to compile but has better runtime performance - and AFAIK this is also quite similar to how Javascript JIT-ing works.
Also from what I understand WASM compilation is more complex than just translating bytecode instructions to native instructions. It's more like compiling an AST into machine code - at least if you want any performance out of it.
The only difference to JS might be that WASM functions are never 'de-optimized'.
> WASM is just a bytecode (similar to JVM or .NET bytecode but even higher level ...
Yes, and I think this was a poor engineering choice by the WASM engineering team, instead of using something much closer to actual assembly. And we are grappling with long startup times and lots of compiler infra pushed into the client because of that.
> ...not your own WASM blob, but you can build a new WASM blob and run that.
another baffling limitation, considering you can modify your C#, Java or even native code at runtime.
Unless they are working around some constraint unknown to me, in which case I'd love to know about what it is, they made bad technical decisions in the design.
Considering that the most important design requirement was to have a security model that's good enough for running untrusted code in web browsers at near native performance, I think the WASM peeps did a pretty good job.
Your requirements may be different, but then maybe WASM simply isn't the right solution for you (there are plenty of alternatives outside web browsers after all).
According to this benchmark (first Google result I found), it was even faster:
https://apryse.com/blog/wasm/wasm-vs-pnacl
While it might not have been perfect, WASM is yet to catch up in many ways, and some of its limitations might come from its design.
PNaCl definitely suffered hard from slow startup times because it ran LLVM for compilation from PNaCl bytecode to native code on startup, and LLVM is slow (I even noticed this compilation process on startup on my absolutely trivial test code). Only the predecessor NaCl didn't suffer from this problem.
There was no 'access to other native APIs', PNaCl created its own set of wrapper APIs to access browser features, and while some of those were better than their standardized web API counterparts, some NaCl/PNaCl APIs were worse than the web APIs they replaced - and for the future, PNaCl would have to create more non-standard APIs for every little feature available in browsers, because:
Integration with the webpage and Javascript was done via message passing, which was just terrible when compared to how easy and fast it is to call between WASM and JS.
The NaCl/PNaCl multithreading feature would have been hit just as hard by Spectre/Meltdown as the SharedArrayBuffer based threading in WASM.
Finally, when you look at the PNaCl toolchain versus Emscripten, Emscripten definitely comes out on top because Emscripten was much more concerned about integrating well with existing build systems and simplifying the porting of existing code, while NaCl/PNaCl had its own weird build system (in old Google NIH tradition). Working with NaCl/PNaCl felt more like working with the Android NDK, which is pretty much the worst developer experience in the world.
Ultimately the sandboxing requirement of running in-process with the renderer process and integrating with Web APIs like JS dictated hard requirements for security.
Since it is generally implemented as part of the Javascript engine, it inherits a lot of what comes with that engine, like sandboxing and access to its APIs. Standardizing access to that is a bit of an ongoing process, but the end state here is that anything that currently can only be done in Javascript will also be possible in WASM. And a lot more that is currently hard or impossible in Javascript. And it all might run a little faster/smoother.
That makes WASM many things. But the main thing it does is remove a lot of restrictions we've had on environments where Javascript is currently popular. Javascript is a bit of a divisive language. Some people love it, some people hate it. It goes from being the only game in town to being one of many things you can pick to do a thing.
It's been styled as a Javascript replacement, as a docker replacement, as a Java replacement, a CGI replacement (this article), etc. The short version of it is that it is all of these things. And more.
Nowadays, the few times I need to build something for the web I use leptos, which has a much nicer DX, and even if it hasn't reached 1.x yet, it feels more stable than chaining like 5 tools to transpile, uglify, minify, pack, ... your JS bundle.
I'm unsure of the source for this Law, but it certainly proves correct more often than not.
"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
The general pattern is called the Inner-Platform Effect.
PHP was literally copy/pasting code snippets into a file and then uploading it to a hosting provider.
I don't build for WASM but I'll bet the money in my pocket to a charity of your choice that it's harder for a beginner.
If somewhat complex apps like Figma can run almost entirely within user's browser, then I think vast majority of the apps out there can. Server side mostly is there to sync data between different instances of the app if the user uses it from different locations.
The tooling for this is in the works, but is not yet mature, e.g. Electric-SQL. Once these libraries are mature, I think this space will take off.
Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
WASM could succeed as well. But mostly in user's browser. Microsoft uses it today for C#/Blazor. But it isn't the correct approach as dotnet in browser will likely never be as fast as Javascript in the browser.
CGI empowers users and small sites. No one talks about it because you can't scale to a trillion ad impressions a second on it. Serverless functions add 10 feet to Bezos's yacht every time someone writes one.
Incidentally, I think that's why local-first didn't take off yet: it's difficult to monetize and it's almost impossible to monetize to the extent of server-based or server-less. If your application code is completely local, software producers are back to copy-protection schemes. If your data is completely local, you can migrate it to another app easily, which is good for the user but bad for the companies. It would be great to have more smaller companies embracing local-first instead of tech behemoths monopolizing resources, but I don't see an easy transition to that state of things.
Local first is what we had all throughout the 80s to 10s. It's just that you can make a lot more from people who rent your software rather than buy it.
It sucks for customers, though.
When people have a subscription that cannot be cancelled month to month, it gives more financial security to the company.
Previously people would buy e.g. the creative suite from Adobe and then work with that version for many, many years to come
The cracked software was there to onramp teens into users. Adobe has burned this ramp and now no one under 14 uses it any more which is quite the change from when I was 14.
Works well local-first, and syncs with the cloud as needed. Flutter space lends itself very well to making local-first apps that also play well in the cloud.
In a way - yes - it's almost like it was before the internet, but mostly because other ways to distribute and run applications have become such a hassle, partly for security reasons, but mostly for gatekeeping reasons by the "platform owners".
On MacOS, the user facing model is still that you download an application, drop it in the Applications folder, and it works.
The sandbox is very very important, it is the reason I mostly do not worry about clicking random links or pasting random urls in a browser.
There are many apps that I would have liked to try if not for the security risk.
Yeah, but try that today (and even by 2010 that wouldn't work anymore). Windows will show a scare popup with a very hard to find 'run anyway' button, unless your application download is above a certain 'reputation score' or is code-signed with an expensive EV certificate.
> On MacOS, the user facing model is still that you download an application, drop it in the Applications folder, and it works.
Not really, macOS will tell you that it cannot verify that the app doesn't do any harm and helpfully offer to move the application into the trash bin (unless the app is signed and notarized - for which you'll need an Apple developer account, and AFAIK even then there will be a 'mild' warning popup that the app has been downloaded from the internet and whether you want to run it anyway). Apple is definitely nudging developers towards the app store, even on macOS.
Also the "big idea" is that those applications aren't portable. Now that primary computers for most people are phones, portable applications are much more important.
Many many ChromeOS (web based consumer OS) laptops are 4GB of ram. You do not want to try that with any normal OSes.
SerenityOS and Ladybird browser forked but until recently had a lot of overlap.
LG's WebOS is used on a range of devices, derived from the Palm Pre WebOS released in 2009.
The gigantic special OS is baggage which has already been cut loose numerous times. Yes, you can run some fine light Linux OSes in 4GB, but man, having done the desktop install for GNOME or KDE, they are not small at all, even if their runtime is OK. And most users will then go open a web browser anyway. It's unclear to me why people cling to the legacy native app world, why this other, not-connected mode of computing has such persistent adherence to it. The web ran a fine mobile OS in 2009; the Palm Pre rocked. It could today.
Not to mention, from the perspective of a developer, the relative simplicity of native apps. Why should I jump through all the hoops of distributed computing to, for example, edit a document in a WYSIWYG editor? This is something I could do comfortably on a Packard Bell in 1992.
It's only JUST NOW we have truly portable UI frameworks. And it's only because of the Web.
You say that the web is portable, but really, only Google's vision for the web is relevant, seeing how they have the final say in how the standards are implemented and evolved.
So it's basically another walled garden, only much bigger and not constrained to the CPU architecture and OS kernel.
Chromium IS a platform. And indeed many applications that do work on Chrome don't work on Firefox. So we're pretty much back where we started, but the problem is harder to see because Chrome has such a monopoly over browsers that for most intents and purposes, and for most devs, it's the only platform that exists.
Everyone is good at multiplat when there's only one plat.
Web apps are popular because 1) people don't like installing things anymore for some reason and 2) it's easier to justify a subscription pricing model.
Want an offline app? That's been possible for a long time: build a local-first app. Don't want to build a client-server system? Fine, build an isolated webapp. There are so many great tools for webdev that get people going fast, that are incomparably quick at throwing something together. It's just bias and ignorance from an old, crusty, complainy world. This is a diseased view, reprehensibly small-minded & aggressively mean, and it's absurd given how much incredible effort has been poured into making HTML and CSS incredibly capable, competent, featureful, fast systems. For shame: torturing a poor, overburdened document display engine into pretending it's a sane place for apps to run.
The web has a somewhat earned reputation for being overwhelmed by ads, which slow things down, but today it feels like most native mobile apps are 60MB+ and also have burdensome slow ads too.
There haven't really been any attempts to go all-in on the web. It has been kind of a second-system half-measure, for the most part, since Pre WebOS gave up on mobile (and FirefoxOS never really got a chance). Apps have had their day and I'm fine with there being offerings for those with a predilection for prehistoric relics, but the web deserves a real full go, deserves a chance too, and the old salty grudges and mean spirits shouldn't obstruct the hopeful & the excited who have pioneered some really great tech that has become the most popular connected, ubiquitous tech on the planet, but which is also still largely a second system and not the whole of the thing.
The web people are always hopeful & excited & the native app people are always overbearingly negative nellies, old men yelling at the cloud. Yeah, there's some structural issues of power around the cloud today, but as Molly White's recent XOXO talk says, the web is still the most powerful system that all humanity shares that we can use to enrich ourselves however we might dream, and I for one feel great excitement and energy, that this is the only promise I see right now that shows open potential. (I would be overjoyed to see native apps show new promise but they feel tired & their adherents to be displeasurable & backwards looking) https://www.youtube.com/watch?v=MTaeVVAvk-c
My man, I am not a fossil. I came of age with web apps. But I am someone who has seen both sides. I have worked professionally on both desktop applications and as a full stack web developer, and my informed takeaway is web apps are insane. Web dev is a nightmarish tower of complexity that is antithetical to good engineering practice, and you should only do it if you are working in a problem space that is well and truly web-native.
I try to live by KISS, and nontrivial web apps are not simple. A couple of things to consider:
1. If it is possible to do the same task with a local application, why should I instead do that task with a web app that does everything in a distributed fashion? Unnecessary distributed computing is insane.
2. If it is possible to do the same task with a local application, and as a single application, not client-server, why should I accept the overhead of running it in a browser? Browsers are massive, complex, and resource hungry. Sure, I'll just run my application inside another complex application inside a complex OS. What's another layer? But actually, raw JS, HTML, and CSS are too slow to work with, so I'll add another layer and do it with React. But actually, React is also too slow to work with, so I'll add another layer and do it with Next.js. That's right, we've got frameworks inside of frameworks now. So that's OS -> GUI library -> browser -> framework -> framework framework -> application.
3. The world desperately needs to reduce its energy consumption to reduce the impact of climate change. If we can make more applications local and turn off a few servers, we should.
I am not an old man yelling at the cloud. I am a software engineer who cares deeply about efficient, reliable software, and I am begging, pleading for people to step back for a second and consider whether a simpler mode of application development is sufficient for their needs.
That's just your opinion, and you're overgeneralizing one framework as the only way.
A 2009 mobile phone did pretty damned awesome with the web. The web is quite fast if you use it well. Sites like GitHub and YouTube use web components & can be extremely fast & featureful.
Folks complain about layers of web tech but what's available out of box is incredible. And it's a strength not a weakness that there are many many ways to do webdev, that we have good options & keep refining or making new attempts. The web keeps enduring, having strong fundamentals that allow iteration & exploration. The Extensible Web Manifesto is alive and well, is the cornerstone supporting many different keystone styles of development. https://github.com/extensibleweb/manifesto
It's just your opinion, again and again, that the web is so bad, all without evidence. It's dirty, shitty hearsay.
Native OSes are massive, complex, and resource hungry and better replaced by the universal hypermedia. We should get rid of the extra layers of non-web that don't help, that are complex and bloated.
It is actually kinda funny to read cries about "enshittification" and praises for more web-based bullshittery on the same site, although both are clearly connected and supporting each other. Good material for studying false consciousness among the dev proletariat.
Might be true, but both will be more than fast enough. We develop Blazor WASM. When it comes to performance, dotnet is not the issue.
Still, you'll be coding your front-end with Wasm/Rust, so get in on the Rust train :)
However it’s likely that generations who weren’t making websites in the days of Matt’s script archive don’t even know about cgi, and end up with massive complex frameworks which go out of style and usability for doing simple tasks.
I’ve got cgi scripts that are over 20 years old which run on modern servers and browsers just as they did during the dot com boom.
Figma and others work because they're mostly client-side applications. But I couldn't, for example, do that with a supply chain application. Or a business monitoring application. Or a ticketing system.
It depends on what you're actually building.
For the business applications I build SSR (without any JS in the stack, but just golang or Rust or Zig) is the future.
It saves resources, which in turn saves money, is way more reliable (again: money) and less complex (again: money) than syncing state all the time and having frontend state diverge from the actual (backend) state.
Business applications don't care about client side resource utilisation. That resource has already been allocated and spent, and it's not like their users can decide to walk away because their app takes an extra 250ms to render.
Client-side compute is the real money saver. This means CSR/SPA/PWA/client-side state and things like WASM DuckDB and perspective over anything long-lived or computationally expensive on the backend.
Recently I wrote an .e57 file uploader for quato.xyz - choose a local file, parse its binary headers and embedded xml, decide if it has embedded jpg panoramas in it, pull some out, to give a preview .. and later convert them and upload to 'the cloud'.
Why do that ? If you just want a panorama web tour, you only need 1GB of typically 50GB .. pointclouds are large, jpgs less so !
I was kind of surprised that was doable in browser, tbh.
We save annotations and 3D linework as json to a backend db .. but I am looking for an append-only json archive format on cloud storage which I think would be a simpler solution, especially as we have some people self hosting .. then the data will all be on their intranet or our big-name-cloud provider... they will just download and run the "app" in browser :]
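For anyone curious, the browser-side pattern described above looks roughly like this (a generic sketch, not the actual quato.xyz code; the `#file-input` element and the header offsets are made up for illustration, and real .e57 parsing is more involved):

```typescript
// Let the user pick a large local file, read only the first few bytes, and
// inspect the binary header before deciding what to convert and upload.
const input = document.querySelector<HTMLInputElement>("#file-input")!;

input.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (!file) return;

  // Only pull the header into memory -- no need to read all 50 GB.
  const headerBytes = await file.slice(0, 48).arrayBuffer();
  const magic = new TextDecoder("ascii").decode(headerBytes.slice(0, 8));
  const view = new DataView(headerBytes);

  console.log(`picked ${file.name}, ${file.size} bytes, magic: "${magic}"`);
  // From here you'd locate the embedded XML section, pull out any panorama
  // JPEGs, and upload just those to the backend.
  console.log("first header u32 (little-endian):", view.getUint32(8, true));
});
```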
This doesn't follow. If Figma has the best of the best developers then most businesses might not be able to write just as complex apps.
C++ is a good example of a language that requires high programming skills to be usable at all. This is one of the reasons PHP became popular.
Unfortunately I got a bit of a burnout from working on it for some years, but I confess I have a more optimized and more to-the-point version of this. Also, having to work on Chrome for this, with all its complexity, is a bit too much.
So even though it is a lot of work, nowadays I think it is better to start from scratch and implement the features slowly.
1 - https://github.com/mumba-org/mumba
The problem is: Figma and Linear are not local-first in the way people who are local-first proponents explain local-first. Both of them require a centralized server, that those companies run, for synchronization. This is not what people mean when they talk about "local-first" being the future, they are talking about what Martin Kleppman defined it as, which is no specialized synchronization software required.
I don't think it's unreasonable from a resources perspective to sync the posts/actions of mutual followers, and from a privacy standpoint it's not really any worse than your friend screenshotting a text message from you.
I would guess WASM is a big building block of the future of apps you imagine. Figma is a good example.
Sure, it allowed a large ecosystem, but holy crap is the whole JVM interface to the external world a clunky mess. For 20+ years I have groaned when encountering anything JVM related.
Comparing the packaging and ecosystem of Rust to that of Python, or shudder C++, shows that reinvention, with lessons learned in prior decades, can be a very very good thing.
You can't easily publish a library in WASM and link it into another application later. But you can publish it as C++ source (say) and compile it into a C++ application, and build the whole thing as WASM.
What are the scenarios where you really really want libraries in WASM format?
But those WASM plugins would be self-contained and wouldn't need to dynamically load other WASM 'DLLs', so that situation is trivial even without the WASM Component Model thingie (which I also think is massively overengineered and kinda pointless - at least from my PoV, maybe other people have different requirements though).
[0]: https://extism.org
[1]: https://getxtp.com
XTP is the first (afaik) platform of its kind meant to enable an app to open up parts of its codebase for authorized outside developers to “push” wasm plugin code extensions directly into the app dynamically.
We created a full testing and simulation suite so the embedding app can ensure the wasm plugin code does what it’s supposed to do before the app loads it.
I believe this is an approach to integration/customization that exceeds the capabilities of Webhooks and HTTP APIs.
Or any other option. Really. So many investigations, so much time wasted.
>Wasm on the Server
>Why on earth are we talking about Wasm? Isn't it for the browser?
>And I really hope even my mention of that question becomes dated, but I still hear this question quite often so it's worth talking about. Wasm was initially developed to run high performant code in the web browser.
Not only JVM, also CLR, BEAM, P-Code, M-Code, and every other bytecode format since UNCOL came to be in 1958, but let's not forget about the coolness of selling WASM instead.
The other point is that WASM is way more open than any of the mentioned predecessors were. They were mostly proprietary crap by vendors who didn't give a shit (flash: security, Microsoft: other platforms) so inevitably someone else would throw their weight around (Apple) to kill them, and with good reason. WASM is part of the browser, so as a vendor you're actually in control regarding security and other things, and are not at the mercy of some lazy entity who doesn't give a damn because they think their product is irreplaceable.
And no, for reasons stated before an applet model would never become the standard again. You'd rather have to integrate Java with the browser so it's entirely under your control, and considering how massive it is and how hard it was to properly sandbox it, nobody in their right mind would decide on this. WASM reuses a lot of infrastructure already there, it's simply the best solution from a technical standpoint.
But Dillo works perfectly fine. No JS, no WASM, crazy fast on a n270 netbook.
I can barely run WASM programs that would have run fine on a Pentium 3 or 4.
It's just a matter of having everybody agree to install the same interpreter, yes. That never happened before.
Never happened before, really?!?
What examples since 1958 would make you happy?
Burroughs, Corvus Systems, IBM, Apple, Unisys, MSR, embedded,....
Probably none of them, I bet.
And your list has no example of anything that was universally installed on everybody's system. The closest is IBM (if you mean x86 opcodes), but code for that one needed to be specialized by OS before it became ubiquitous, and got competitors before its main OS became ubiquitous, and then became ubiquitous again but with 2 main OSes, and then got competitors again.
Because of the sandbox nature of WASM technically it could even run outside an operating system or in ring0 bypassing a lot of OS overhead.
Compiling to WASM makes a whole range of deployment problems a lot simpler for the user and gives a lot of room for the hosting environment to do optimizations (maybe even custom hardware to make WASM run faster).
Massive scaling with minimal resources is certainly one important enabler. If you were, e.g., to re-architect Wikipedia with the knowledge and hardware of today, how would you do it with wasm (on both desktop and mobile)? How about a massive multiplayer game, etc.?
On the other hand you have the constraints and costs of current commercial / business model realities and legacy patterns that create a high bar for any innovation to flourish. But high does not mean infinitely high.
I hate to be the person mentioning AI on every HN thread but it's a good example of the long stagnation and then torrential change that is the hallmark of how online connected computing adoption evolves: e.g., we could have had online numerically very intensive apps and APIs a long time ago already (LLMs are not the only useful algorithm invented by humankind). But we didn't. It takes engineering a stampede to move the lazy (cash) cows to new grassland.
So it does feel that at some point starting with a fresh canvas might make sense (as in, substantially expand what is possible). When the cruft accumulates sometimes it collapses under its own weight.
I hate WASM-heavy websites, as they often have a bloat of JavaScript and the site is very slow, especially during scrolling and zooming, due to abuse of event listeners and piss-poor coding discipline.
I kinda miss sometimes server rendered index.php
If you're generating bindings for some legacy disaster and shipping it to clients as a big WASM blob you're going to hell.
Currently it is a huge PITA to have to update and redeploy your AWS Lambda apps whenever a Node.js or Python version is deprecated. Of course, usually the old code "just works" in the new runtime version, but I don't want to have to worry about it every few years. I think applications should work forever if you want them to, and WASM combined with serverless like Lambda will provide the right kind of platform for that.
>Just in Time (JIT) compilation is not possible as dynamic Wasm code generation is not allowed for security reasons.
I don't follow -- is the Wasm runtime VM forbidden from JITing? (How could such a prohibition even be specified?) Assuming this is the case, I'm surprised that this is considered a security threat, given that TTBOMK JVMs have done this for decades, I think mostly without security issues? (Happy to be corrected, but I haven't heard of any.)
As far as model goes, the serverless one is not a different model. It is still a flavor of the CGI concept. But the underlying tech is different. And not that much. It is only serverless for you as a customer. Technically speaking, it runs on servers in micro-VMs.
Those are orthogonal matters, and even if such tech as the middleware mentioned get some wind, the execution model is still the same and is not new.
> One of the many effect of how [WASM] modules are isolated is that you can "pause" a module, and save its memory as a data segment. A similar concept to a Snapshot of a virtual machine. You can then start as many copies of the paused module as you like. (As I tell friends, it's like saving your game in an emulator.)
> The snapshotted module has no extra startup time ...
> If we go back to thinking about our Application Server models; this allows us to have a fresh process but without paying the startup costs of a new process. Essentially giving us CGI without the downsides of CGI. Or in more recent terms, serverless without cold starts. This is how Wasm is the new CGI.
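To ground what that snapshot idea means, here's a rough JS-side sketch (my own illustration, not the article's actual mechanism; the `init` and `memory` export names are made up, and real snapshotting tools such as Wizer-style pre-initialization also have to capture globals, tables and the stack, which this ignores):

```typescript
// Copy an instance's linear memory out, then seed a fresh instance of the
// same module with that copy instead of re-running expensive initialization.
async function snapshotAndRestore(wasmBytes: Uint8Array) {
  const mod = await WebAssembly.compile(wasmBytes);

  // First instance: assumed to export its memory as `memory` and an
  // expensive setup function as `init` (hypothetical export names).
  const a = await WebAssembly.instantiate(mod, {});
  const init = a.exports.init as unknown as () => void;
  init();
  const memA = a.exports.memory as WebAssembly.Memory;

  // "Pause": snapshot the initialized linear memory.
  const snapshot = new Uint8Array(memA.buffer.slice(0));

  // Later: spin up a fresh instance and restore the snapshot rather than
  // paying the startup cost again -- "serverless without cold starts".
  const b = await WebAssembly.instantiate(mod, {});
  const memB = b.exports.memory as WebAssembly.Memory;
  const neededPages = Math.ceil(snapshot.length / 65536);
  const currentPages = memB.buffer.byteLength / 65536;
  if (neededPages > currentPages) memB.grow(neededPages - currentPages);
  new Uint8Array(memB.buffer).set(snapshot);
  return b;
}
```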
https://codeberg.org/valpackett/caddy-wasm-wcgi
Sorry, I'll use this rare opportunity to bring up WCGI for Caddy. :-) It is a Caddy web server plugin that runs CGI applications compiled to Wasm, which includes scripting language runtimes. The project isn't mine, and I haven't tried it for anything beyond a "Hello, world!". I think it is a neat hack.
Bridging state across requests is not new. If "the new CGI" means more efficiently sharing state between requests, that's a really arbitrary qualifier and is not unique to WASM or serverless or anything like that. The article is myopic, it doesn't take into consideration what is established practice done over and over.
> If we go back to thinking about our Application Server models; this allows us to have a fresh process but without paying the startup costs of a new process. Essentially giving us CGI without the downsides of CGI. Or in more recent terms, serverless without cold starts. This is how Wasm is the new CGI.
^ It's not a frivolous claim.
> Wasm improves performance, makes process level security much easier, and lowers the cost of building and executing serverless functions. It can run almost any language and with module linking and interface types it lowers the latency between functions incredibly.
^ Not unreasonable.
I don't agree that it's necessarily totally 'game changing', but if you read this article and you get to the end and you don't agree with:
> When you change the constraints in a system you enable things that were impossible before.
Then I'm left scratching my head what it was you actually read, or what the heck you're talking about.
> Serverless is mostly there to make money for Amazon and Azures of the world and will eventually go the way of the CGI.
There's... just no possible future in which AWS and Azure just go away and stop selling something which is making them money when a new technology comes along and makes it easier, safer and cheaper to do it.
> I kind of like this variety of headline for it's ability to stimulate discussion but it's also nonsense. CGI can be any type of code responding to an individual web request, represented as a set of parameters. It has basically nothing to do with wasm
*shakes head sadly...*
...well, time will tell, but for alllll the naysayers, WASM is here to stay and more and more people are using it for more and more things.
Good? Bad? Dunno. ...but it certainly isn't some pointless niche tech that no one cares about or that is about to disappear.
CGI enabled a lot of things. WASM does too. The comparison isn't totally outrageous. It'll be fun to see where it ends up. :)
It's amazing how just one sentence can be so utterly wrong.
WSGI actually predates rack by several years: first WSGI spec was published in 2003 [0], rack was split from Rails in 2007 [1].
Flask is not an "application server", it is one of the web frameworks that implements WSGI interface. Another popular framework that also implements it is Django. Flask is not the first WSGI implementation, so I'm not sure why author decided to mention Flask specifically. It's probably one of the most popular WSGI implementations but there is nothing special about it, it hasn't introduced any new concepts or a new paradigm or anything like that.
I'm not sure if the rest of the article is even worth reading if the author can't even get the basic facts right but for some reason feels the need to make up total nonsense in their place.
[0] https://peps.python.org/pep-0333/
[1] https://github.com/rack/rack/blob/main/CHANGELOG.md
That doesn’t mean there weren’t good technical reasons, but that’s not necessarily the driver.
For example, SSL is obviously good, but requiring SSL also raises the cost of making a new site above zero, greatly reducing search spam (a problem that costs billions otherwise).
WASM is basically the new Microsoft Common Language Runtime, or the new JVM etc.
But OPEN!
https://en.wikipedia.org/wiki/Bytecode
I let chatgpt do the tedious work, have a look at a minimal example:
https://chatgpt.com/share/6707c2f3-5840-8008-96eb-e5002e2241...
For better or worse, browser APIs have been designed to be used with Javascript so some FFI magic needs to happen when called from other languages, with or without WASM.
And if each web API would automatically come with a C API specification (like WebGPU kinda does for instance), Rust people would complain anyway that they need to talk to an 'archaic' C API instead of a 'modern' Rust API etc etc...
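Roughly what that glue looks like when written by hand (every import/export name below is made up for illustration; toolchains like Emscripten or wasm-bindgen generate the equivalent automatically):

```typescript
// Browser APIs are only reachable as JS functions, so non-JS languages
// compiled to WASM call small JS shims passed in through the import object.
const imports = {
  env: {
    // Shim exposing a web API (console.log of a number) to the WASM side.
    js_log_number: (x: number) => console.log("from wasm:", x),
    // Shim exposing performance.now() so the module can measure time.
    js_now_ms: () => performance.now(),
  },
};

async function run(wasmBytes: Uint8Array) {
  const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
  // Assumes the module declares imports like (import "env" "js_log_number" ...)
  // and exports an entry point called `main` -- all hypothetical names.
  (instance.exports.main as unknown as () => void)();
}
```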
Before WASM you could already compile code from other languages into JavaScript. And have the same benefits as you have with WASM.
The only benefit WASM brings is a bit faster execution time. Like twice the speed. Which most applications don't need. And which plain JavaScript offers about two years later because computers become faster.
And you pay dearly for being these two years ahead in terms of execution time. WASM is much more cumbersome to handle than plain JS when it comes to deployment, execution and debugging.
In IT we see it over and over again that saving developer time is more important than saving CPU cycles. So I think choosing WASM over plain JS is a net negative.
Sure, native JS is easier still. But there is a huge wealth of code already written in languages that are not JS. If you want a web app that needs this code, you'll develop it many times faster by compiling the pre-existing code to WASM than by manually rewriting them in JS, and the experience will be significantly better than compiling that code to JS.
I'm curious why you're taking the approach you describe, I think compiling entire GUI apps to WASM is the absolute worst thing, so clearly you have a different set of constraints on your work.
10k candles at 120 fps seems like you could absolutely do it in JS alone, though I suppose the app came first and wanting to deploy it to end users via a webpage is an afterthought. Tbh writing performant JS for something like this isn't fun so despite my comments to the contrary you're probably making the right choice here.
I think so too. I think everything we have is entirely possible to achieve in JavaScript but you're spot on, writing performant JS like this isn't fun and harder to maintain.
> Don't worry I'm ok without having my eyes burned out by the lack of proper subpixel AA on your fonts. :P
Fair fair. It is definitely happening, more noticeable in certain situations. :)
If you are referring to asm.js you must be joking. asm.js was basically a proof of concept and is worse in every way compared to WASM.
Like parsing time overhead alone makes it a non-option for most large applications.
You seem to imply you should just do it in plain JS instead for "deployment, execution and debugging" benefits. Imagine if you could be free to use those Python ML libs in any language of your choice; that alone is enough of an argument. No one is going to reimplement them in JS (or any other environment) unless there is a huge ecosystem movement around it.
Look into the history of WASM. They did try compiling everything into JS with asm.js, but then sensibly decided to do things properly. I don't know why anyone would object to proper engineering.
That's all. All other aspects of the workflow are the same.
Yes javascript is very well optimized, but as someone who's spent a lot of time writing javascript where speed matters, it's not easy, and it's not predictable. You're at the mercy of arcane optimizations in V8 which might not work for your specific situation because you did something weird, and if you're taking a lot of care not to do anything weird, and manually managing your memory with typed arrays, well, then you might as well write C and compile to WASM.
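For example, "manually managing your memory with typed arrays" usually ends up looking something like this sketch: one big preallocated flat buffer that gets reused every frame instead of per-frame object allocations.

```typescript
// Preallocate one flat Float64Array and reuse it, instead of allocating
// objects that the GC has to chase.
const MAX_POINTS = 10_000;
// x and y interleaved: [x0, y0, x1, y1, ...]
const points = new Float64Array(MAX_POINTS * 2);

function integrate(dt: number, vx: number, vy: number): void {
  // Tight numeric loop over a flat buffer: predictable, monomorphic,
  // and no per-frame allocations for the GC to collect.
  for (let i = 0; i < points.length; i += 2) {
    points[i] += vx * dt;
    points[i + 1] += vy * dt;
  }
}
```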
If you use an algorithm that near exhausts memory, that's where you'll start seeing that "order of magnitude" difference between JS and something like C++. The same goes for Java and C#.
At low memory utilization, the GC can just put off collection, which saves execution time, so the runtime appears fast. But if you're close to the limit, then the GC has no choice but to pause often before continuing. Not very many algorithms will encounter this, but applications might, depending on what they do.
asm.js (the spiritual precursor to WASM) worked pretty much the same, and an awful lot of languages were compiled to it.
WASM does provide a more predictable compilation target to be sure, but I don't think it actually opens any new possibilities re what languages can be compiled.
It may be as you say that there are no new theoretical possibilities being opened by WASM, but to me it is a natural step forward to resolve inefficiencies and ergonomic problems in ASM.js and make it all less painful. And hopefully WASM won't be frozen in time either - the platform needs to keep improving to make more use-case scenarios practical.
[1] https://github.com/Kagami/webm.js/
For some of us it's much easier than dealing with Javascript though (for instance debugging C/C++ in Visual Studio is much nicer than debugging JS in Chrome - and that's possible by simply building for a native target, and then just cross-compile to WASM - but even the WASM debugging situation has improved dramatically with https://marketplace.visualstudio.com/items?itemName=ms-vscod...)
What WASM actually brings is predictable performance.
If you're a JS wizard, you can shuffle code around, using obscure tricks to make the current browser run it really fast. The problem is: JS wizards are rare, and tomorrow's browser might actually run the same code much slower if some particular optimization changed.
WASM performance is pretty obvious and won't change significantly across versions. And you don't need to be a wizard, you just need to know C and write good enough code; plenty of people can do that. Clang will do the rest.
I agree that using WASM instead of JS without a reason probably is not very wise. But people will abuse everything and sometimes it works out, so who knows... The whole modern web was born as an abuse of a simple language made to blink text.