I love Zod, but recently I've been converting to Effect's Data and Schema modules.
Previously I liked a combination of Zod and ts-pattern to create safe, pattern matching-oriented logic around my data. I find Effect is designed far better for this, so far. I'm enjoying it a lot. The Schema module has a nice convention for expressing validators, and it's very composable and flexible: https://effect.website/docs/guides/schema/introduction
There are also really nice aspects like the interoperability between Schema and Data, allowing you to safely parse data from outside your application boundary then perform safe operations like exhaustively matching on tagged types (essentially discriminated unions): https://effect.website/docs/other/data-types/data#is-and-mat...
It feels extremely productive and intuitive once you get the hang of it. I didn't expect to like it so much.
I think the real power here is that these modules also have full interop with the rest of Effect. Effects are like little lazily evaluated logical bits that are all consistent in how they resolve, making it trivial to compose and execute logic. Data and Schema fit into the ecosystem perfectly, making it really easy to compose very resilient, scalable, reliable data pipelines, for example. I'm a convert.
Zod is awesome if you don't want to adopt Effect wholesale, though.
bjacobso 43 days ago [-]
I've had a very similar experience, and have been slowly moving from zod and ts-rest to @effect/schema and @effect/platform/HttpApi, as well as migrating from ts-pattern to Effect Match. There is a learning curve, but it's a pretty incredible ecosystem once you are in it for a bit.
I think the real turning point was typescript 5.5 (May 2024). The creator of typescript personally fixed a bug that unlocked a more natural generator syntax for Effect, which I think unlocks mainstream adoption potential.
I feel like Effect is today's Ramda. So cool, but it's going to be regretted by you and the people coming after you in a few years.
Me and my team reverted to more stupid code and we are happier.
presentation 42 days ago [-]
My experience with fp-ts and io-ts was that we quickly got to a point where the team was divided into a small group of people usually with postgraduate CS degrees who really understood it, and then everyone else who saw it as black magic and were afraid to touch it.
Nowadays I’d rather rely on libraries that don’t require a phd to use them properly.
> Me and my team reverted to more stupid code and we are happier.
This is 100% how to write more reliable software. We are in the process of reducing our TS dependencies to effectively just express and node-postgres and everything is becoming infinitely easier to manage.
BillyTheKing 42 days ago [-]
Yes, all true - apart from treating errors as values and including them in function signatures... That should simply be something every modern language ships with.
Sammi 42 days ago [-]
I have never written any code in Go, but increasingly I am writing my TS in the style I hear Go code is written in. Very procedural, very verbose. All of the fancy indirection you get with coding everything with higher order functions just makes the program impossible to debug later. Procedural style programming lends itself to debugging, and I definitely am so dumb I need to debug my own programs.
I may simply be too dumb for lots of fancy functional programming. I can barely understand code when reading one line and statement at a time. Reading functions calling functions calling functions just makes me feel like gravity stopped working and I don't know which way is up. My brain too small.
steve_adams_86 42 days ago [-]
I agree in some contexts. Kind of like Rust, I see a place for more durable code that's harder to reason about in some cases.
I wouldn't use Effect for a lot of things. For some things, I'm very glad to have it. One thing Effect has going for it that Ramda didn't is that it's much less abstract and it's quite a bit more opinionated about some more complex concepts like error handling, concurrency, or scheduling.
Kind of like state machines. You shouldn't use them for everything. For some things, it's a bad idea not to (in my opinion).
Then of course subjectivity is a factor here. Some people will never like conventions like Effect, and that's fine too. Just write what feels right.
Stoids 42 days ago [-]
I think going all-in on Effect in its current state is not something I'd do. However, there's a subset of its functionality that I'm currently replicating with a bunch of different libraries: ts-pattern, zod, some lightweight result / option wrapper like ts-belt, etc. Pretty much trying to pretend I'm writing ML / OCaml. Having those all in one package is quite convenient. Add in the much-needed story around retry / observability / error handling that TypeScript lacks—I see why people lean into it.
Having experience with ZIO / FP in Scala, I'm a bit biased in seeing the value of Effect systems as a whole, but taking on the burden of explaining that mental model to team members and future maintainers is a big cost for most teams.
rashkov 42 days ago [-]
Coming from a ReasonML / OCaml codebase (frontend react), I'm seeing a lot to love with the pattern matching and sum types. Zod is already one of my favorites (coming from https://github.com/glennsl/bs-json).
Is "retry / observability / error handling" something that comes from Effect?
Stoids 41 days ago [-]
That's right, Effect lifts all types to a lazily-evaluated common type and provides combinators to work with that type, similar to RxJS with Observables and its operators.
Retrying[0], observability[1], and error handling[2] are first-class concerns and have built-in combinators to make dealing with those problems quite ergonomic. Going without these features is a non-starter for any serious application, but unfortunately, the story around them in the TypeScript ecosystem is not great—at least as far as coherence goes. You often end up creating abstractions on top of unrelated libraries and trying to smash them together.
I'm a big fan of ReasonML / OCaml, and I think the future of TypeScript will involve imitating many of its code patterns.
I liked the idea of Ramda until I saw codebases that were using it for everything.
I've been doing JS for over a decade now and I couldn't understand a thing.
MichaelArnaldi 42 days ago [-]
Don't judge Effect based on Ramda; they have completely different objectives. Ramda focused a lot on abstractions, similar to fp-ts; Effect focuses almost exclusively on concrete implementations.
evilduck 42 days ago [-]
It's not the library per se, it's that the library will require all-or-nothing buy-in from your entire development team for it to be useful and persistently maintained, similar to how Ramda affected a codebase and a development team.
It's the same effect as adding async code to Python or Rust, suddenly the entire team and the entire codebase (and often dependency choices) must adhere to it.
steve_adams_86 42 days ago [-]
One of the things I like about Effect is that incremental adoption is easy, and you really don't have to use it everywhere.
You can choose to make a single flow in your application an effect program. Or you can base most of your functions around it. It's really up to you how and where it's used. If you want to use an effect within non-effect code, that's easy to do, too.
You can think of effects like values. The value is obtained by executing the effect. Until it's called, the effect can be placed anywhere, in any function, in a generator, within promises, etc. Once you need its value, you execute it. It's compatible with most code bases as long as you can execute it to get the value. It's really up to the developer how portable they want their effects to be.
evilduck 42 days ago [-]
I understand, but this is very much like promises in JS. You can pass promises around without awaiting their return values too but async and/or promise code inevitably infects the rest of the codebase. I've never really seen anyone just have promises walled off in one specific module where the rest of the codebase sticks to callbacks.
Passing Effects around will similarly infect the entire codebase, resulting in the entire dev team who interacts with it needing to buy in. Limiting the output of Effects to a single module owned by one zealot dev undermines having it around in the first place and it'll get removed and replaced as soon as that person leaves or gives up the fight.
epolanski 42 days ago [-]
Effect is not only much easier to get productive on than Ramda, but it provides an entire ecosystem.
Our team has been full Effect for two years, and juniors can pick it up and start working with it with ease.
steve_adams_86 42 days ago [-]
Having references to work from is essential. Their documentation doesn’t do as good of a job demonstrating that as it could, in my opinion.
yunohn 42 days ago [-]
I’ve found that with most libraries - they always provide toy foobar-style examples, assuming it’ll make them approachable, but in reality this makes it impossible to understand the practical way to use them in real-world settings.
epolanski 42 days ago [-]
That is quite a hard problem to solve.
Solutions like effect are easier to appreciate as your application starts growing in complexity beyond simple todo apps.
Solutions like effect/schema are easier to appreciate as soon as you start needing complex types, encoding/decoding, branded types and more.
I am quite confident that effect will keep steadily growing in popularity.
It took more than 5-6 years for TypeScript or React to spread through the JS community. Effect is here to stay and I'm confident it will eventually be adopted by plenty of developers.
hombre_fatal 42 days ago [-]
Reminds me of Animal / Cat / Dog examples.
For the love of god just use User / RegisteredUser / GuestUser and other abstractions that have some basis in the real world.
bjacobso 43 days ago [-]
that is certainly a possibility
dimal 42 days ago [-]
How did you find learning Effect? The sales pitch sounds great, but when I went through the docs it seemed pretty confusing to me. I’m sure there are reasons for everything, but I couldn’t grok it. In particular, I’m thinking of the Express integration example.[0] I look at that and think, I need all that just to create a server and a route? What’s the benefit there? I’m hesitant to buy into the ecosystem after looking at that. I want to like it, though.
I've been following the author Sandro Maglione for quite a while and am on his email list; he's great. He wrote fpdart, which I've used, and now he seems to be all-in on Effect, with XState.
wruza 42 days ago [-]
That’s just another level. After five pages of explanations we come to something that is basically:

  await fetch(…).catch(e => new CustomError(…))

But with a wrapped-promise and flatmap nonsense for “better error handling”.
FP always goes out of its way to avoid using the language it operates in, and to criticize the ways of doing something it has just imagined. As if it wanted to stay as aloof from the unwashed peasants as it could, but has to do the same job to keep existing.
> how to test () => makePayment()?

(from the link) You don’t. You test constituents like request body generation and response handling. It’s inside. You can’t test your Effect version of this code either. It’s a strawman.
theschwa 42 days ago [-]
Yeah, looking at that example feels like jumping straight into the deep end of the pool. I think it helps going through a tutorial that breaks down the why of each piece. I really liked this tutorial on it: https://www.typeonce.dev/course/effect-beginners-complete-ge...
Some core things from Effect though that you can see in that Express example:
* Break things down into Services. Effect handles typed dependency injection for services, so you can easily test them and have different versions running for testing, production, etc.
* Fibers for threaded execution
* Managing resources to make sure they're properly closed with scope
I think a lot of these things though often aren't truly appreciated until you've had something go wrong before or you've had to build a system to manage them yourself.
mewpmewp2 42 days ago [-]
But I feel like I've worked with massive systems with a lot going on where nothing has gone wrong that this sort of thing specifically would have solved. I think it would just increase the learning curve and make people make other types of mistakes (business logic or otherwise) because it's so much less readable and understandable. I've seen similar libraries used in the past that have caused much worse bugs because people misunderstand how exactly they work.
dimal 42 days ago [-]
Yeah, this looks like the tutorial I needed. Thanks.
bjacobso 42 days ago [-]
I agree, some of their examples are a little overly complicated by their quest to be hyper-composable. In fact they should probably remove that example. I am currently using it with Remix, using their @effect/platform package to produce a simple web handler (request: Request) => Response (thanks to Remix for heavily promoting the adoption of web standards).
I fully agree parts of the ecosystem are complex, and likely not fully ready for broad adoption. But I do think things will simplify with time, patterns will emerge, and it will be seen as react-for-the-backend, the de facto first choice. effect + schema + platform + cluster will be an extremely compelling stack.
steve_adams_86 42 days ago [-]
I agree about the turning point. Things have improved dramatically. And I know it probably doesn't feel the same for tons of people, but I love to see generators being used in everyday code.
The learning curve just about turned me away from it at the start, but I'm glad I stuck with it.
I think learning Effect would actually teach a lot of people some very useful concepts and patterns for programming in general. It's very well thought out.
williamcotton 42 days ago [-]
Why not write your code in F# and compile it to TypeScript using Fable [1]?
This way you can use native language features for discriminated unions, functional pipelines, and exhaustive pattern matching to model your domain instead of shoe-horning such functionality into a non-ML language!
Model your domain in F#, consume it in Python or C# backends and TypeScript frontends. The downside is needing to know all of these languages and run times but I think I'd rather know F# and the quirks with interacting with TypeScript than a library like Effect!
I've tried compile-to-JS languages before but their big weaknesses are:
1. Debugging can become quite a pain. Nobody likes debugging generated code.
2. You don't get to use libraries and tools from the enormous JavaScript ecosystem.
3. Eventually you'll find some web feature that they haven't wrapped in your language and then you're in for FFI pain.
In the end I found Typescript was good enough that it wasn't worth dealing with those issues.
williamcotton 41 days ago [-]
Fable is still very much integrated with the runtime so there’s an expectation to handle those bullet points with inline annotations.
You can build an entire application in F# and compile to JS but another option is compiling to TS and calling that F# code from your TS app. I/O and views and whatnot are written in TS and the domain model is in F#. The entire model could be nothing but pure functions and resolve to a single expression!
steve_adams_86 42 days ago [-]
This is so cool! I’ll have to try it out soon. Thanks!
Aeolun 42 days ago [-]
I like the functionality, but the verbosity of the API makes me want to immediately ignore it. I feel like zod nailed the usability part.
epolanski 42 days ago [-]
Schema is a much more powerful tool than Zod. Zod is merely a parser, while Schema has a decoder/encoder architecture.
skrebbel 42 days ago [-]
As with Java, you can write Haskell in any language, but that doesn't mean it's always a good idea.
MichaelArnaldi 42 days ago [-]
Which is why our principle with Effect is NOT to port Haskell. For example, we don't use typeclasses, which are big in Haskell, and we lean heavily on variance, which is not in Haskell. It's an ecosystem for writing production-grade TypeScript, not Haskell in TypeScript.
seer 42 days ago [-]
I’d love to hear more stories of people using Effect in production codebases.
It looks very similar in its ideas to fp-ts (in the “let’s bring monads, algebraic types etc to typescript” sense).
But I did hear from teams that embraced fp-ts that things kinda ground to a halt for them. And those were category theory enthusiasts that were very good scala devs so I’m sure they knew what they were doing with fp-ts.
What happened was that TypeScript compile times just shot up into minutes for a moderately sized micro-service, without anything externally heavy being introduced like you could have on the frontend.
It just turned out that the TypeScript compiler was not so great at tracking all the inferred types throughout the codebase.
So I wonder if things have improved, or if Effect uses types more intelligently so that this is not an issue.
golergka 42 days ago [-]
> It looks very similar in its ideas to fp-ts (in the “let’s bring monads, algebraic types etc to typescript” sense).
It's the next version of fp-ts, developed by the same people, AFAIK
aswerty 41 days ago [-]
Effect reminds me more of something like NestJS - essentially an end to end application framework that takes over your whole application.
Rather disappointing to see something like this being plugged as an alternative to something like zod, which is a nice library that stays in its corner and has a nice fixed scope to its functionality.
threatofrain 43 days ago [-]
This is just a link to the front page of possibly the #1 most popular type validation library in the ecosystem? Anyways, y'all might want to check out the up-and-coming Valibot, which has a really nice pipe API.
Valibot is really nice, particularly for avoiding bundle size bloat. Because Zod uses a "fluent builder" API, all of Zod's functionality is implemented in classes with many methods. Importing something like `z.string` also imports validators to check if the string is a UUID, email address, has a minimum or maximum length, matches a regex, and so on - even if none of those validators are used. Valibot makes these independent functions that are composed using the "pipe" function, which means that only the functions which are actually used need to be included in your JavaScript bundle. Since most apps use only a small percentage of the available validators, the bundle size reduction can be quite significant relative to Zod.
lolinder 42 days ago [-]
Is there a reason why tree shaking algorithms don't prune unused class members? My IDE can tell me when a method is unused, it seems odd that the tree shaker can't.
vbezhenar 42 days ago [-]
Because you can write `this[anything()]()` and it's impossible to analyze it. IDE false negative will not do anything bad, but treeshaker false negative will introduce a bug, so they have to be conservative.
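A contrived example of the problem (all names made up):

```typescript
// A tree shaker can't prove `isUuid` is dead code, because a computed
// member access may reach any method by a name constructed at runtime.
class Validators {
  isEmail(s: string) { return s.includes("@") }
  isUuid(s: string) { return /^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$/.test(s) }
}

function anything(): string {
  // opaque to static analysis: could come from config, the network, user input...
  return Date.now() % 2 === 0 ? "isEmail" : "isUuid"
}

const validators = new Validators()
// no static reference to `isUuid` by name, yet it may be invoked here:
;(validators as any)[anything()]("hello@example.com")
```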
aleclarsoniv 42 days ago [-]
That's not entirely true. Tree-shaking algorithms could have a “noDynamicAccess” option that errors on such use (only viable for applications, not libraries). Alternatively, the algorithm could be integrated with the TypeScript compiler API to allow dynamic access in some cases (e.g. where the `anything` function in your example only returns a “string literal” constant type or a union thereof, instead of the `string` type).
everforward 42 days ago [-]
Does that work for all class members? I think I've only ever seen that on private members, though I don't know whether that's because it's so much easier to check whether a private member is used or because an unused public member isn't an issue for eg a library.
This feels like an issue that reduces down to the Halting Problem, though. Halting is a function that could be made a member of a class, so if you could tell whether that method is used or not then you could tell whether the program will halt or not. I think it's one of those things that feels like it should be fairly easy, and it's really really not.
lolinder 42 days ago [-]
Comparing this to the halting problem isn't really meaningful here because even if you could make a full mapping (which yours isn't), you can prove that a rather large subset of programs halt, which is good enough for a tree shaker.
I don't need to be able to eliminate every single unused function in every situation, but if I can prove that certain functions are unused then I can delete just those functions. We're already doing this regularly with standalone functions, so my question is just why this isn't done with class members.
everforward 41 days ago [-]
Ah, I see your question now. Prototypes maybe? I’m not nearly a good enough JS dev to have a reasonable guess at that specifically.
Being able to access class members using square bracket syntax with a variable also seems like it would make it really difficult to prove that something isn’t used. I’m thinking something unhinged like using parameters to build a string named after a class member and then accessing it that way.
Dunno, I would be curious if someone has a definitive answer as well.
aleclarsoniv 42 days ago [-]
It requires flow analysis, which is really hard to get right. I don't think there's a tree-shaking library that uses the TypeScript compiler API for static analysis purposes. Maybe because it would be slow?
edit: The creator of Terser is working on flow analysis for his new minifier, according to him[1].
* Not the creator of Terser, but the lead maintainer
crooked-v 43 days ago [-]
Valibot also has much, much more efficient type inference, which sounds unimportant right up until you have 50 schemas referencing each other and all your Typescript stuff slows to a molasses crawl.
Escapado 42 days ago [-]
Not only that, runtime schema validation was also way faster in my last project for larger arrays of complex objects with lots of union types. Went from 400ms to 24ms all else being equal. Since our validation was running on every request for certain reasons this made a huge difference in perceived performance for our users and less load on our servers.
zztop44 42 days ago [-]
Agreed. I recently moved a production app from Zod to Valibot for exactly this reason. I still slightly prefer Zod’s syntax but the performance issues were absolutely killing us.
kaoD 43 days ago [-]
Does/will this correctly handle `| undefined` vs `?`, i.e. correct behavior under `exactOptionalPropertyTypes`?
Zod doesn't (yet[0]) and it's been a pain point for me.
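For context, the distinction the flag enforces (a plain-TS sketch):

```typescript
// `?` means the key may be absent; `| undefined` means the key must exist
// but may hold undefined. With exactOptionalPropertyTypes enabled,
// `{ name: undefined }` is no longer assignable to A.
type A = { name?: string }
type B = { name: string | undefined }

const absent: A = {}
const present: B = { name: undefined }

"name" in absent            // false
"name" in present           // true
Object.keys(present).length // 1 — visible to serialization and spreads
```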
Is Valibot's error handling any good? Zod's is… like punishment for a crime I wasn't aware that I had committed.
mightyham 43 days ago [-]
I'm currently using valibot and the author's associated modular form library in a production app and can vouch that it's been very pleasant to work with.
Timon3 43 days ago [-]
I recently stumbled onto this when looking for alternatives, and though it might sound a bit extreme: the style of documentation means I'll never consider it for any project. It's strange and honestly somehow unnerving to read documentation written from a first-person perspective. Is there some good reason for this that I'm missing? Or am I just crazy and this isn't an issue for anyone else?
Bjartr 43 days ago [-]
I only see second-person "you" and not first-person "I" in the linked documentation. Am I missing what you intended to point out?
In any case, this might actually be a good use for an LLM to post-process it into whatever style you want. I bet there's even a browser extension that could do it on-demand and in-place.
Timon3 43 days ago [-]
Really? For example under "Main concepts" on the "Schemas" site[0], I see stuff like this:
- I support the creation of schemas for any primitive data type.
- Among complex values I support objects, records, arrays, tuples as well as various other classes.
- For objects I provide various methods like pick, omit, partial and required.
- Beyond primitive and complex values, I also provide schema functions for more special cases.
Same for "Mental model", "Pipeline", "Parse data", "Infer types", "Methods" and "Issues" - I'll assume the other sections also follow this style. That's all not showing up for you?
While the LLM suggestion is nice, it's not something I'm comfortable with unless hallucinations are incredibly rare. Why would I use a library whose documentation I have to pass through an unreliable preprocessor to follow a normal style?
I honestly don't want my validation library to "tell a story" at the expense of documentation clarity. It's absolutely fine that this project uses it, I don't want to impose my view on them - I guess it's just not the validation library for me.
aswerty 41 days ago [-]
I don't think you are crazy at all. The concept of professionalism is about doing things a certain way to indicate to others that you are, of course, a professional. Writing technical documents in the 3rd person is a basic part of what I would consider professionalism in the broader engineering field.
In the software field you get a large portion of people who don't buy into the concept of professionalism. For various reasons - chiefly the hacker culture, and the ease of contributing to the "field", which means the gauntlet one runs to become a "professional" isn't inherently a given.
As a whole this is a good thing but it does mean if you operate as a "professional" maybe sometimes you have to realize that something doesn't exactly gel with your ethos (case in point). It doesn't mean it is bad; just maybe not for you and yours.
Timon3 41 days ago [-]
Thank you for sharing your perspective, I hadn't really thought about that angle. If a colleague asked me to explain my choice, I feel there'd be some appearance of being convinced by style over substance.
There's still more at play, since I really keep visually stumbling over those sentences, but that seems to be more related to me. And you're absolutely right that this doesn't make the project objectively worse - I wish them best of luck and hope their approach to documentation helps others!
kaoD 43 days ago [-]
> Or am I just crazy and this isn't an issue for anyone else?
Not an issue for me, to be honest. Why does it bother you at all?
Timon3 43 days ago [-]
I'm not exactly sure myself, but every time I open one of the pages, I mentally stumble over the formulations. I think there are two components:
1. The biggest part is that I've simply never seen documentation written in this style, any mentions of "I" or "we" are usually explaining the choices made by the author(s). When skimming documentation I pay more attention to those parts. Here those parts don't have a comparable meaning.
2. The smaller part is that the writing style reminds me of the way brands use mascots with first-person writing to advertise to children. There's not really any other association I have with this way of writing, and it makes me feel like the author either isn't taking the project seriously, or me.
I'm not trying to argue that the documentation should be understood this way, or that it should be changed - but I've stumbled over this multiple times, and can't imagine that it's just me.
skybrian 42 days ago [-]
It's a little too cute, but I don't see it as a deciding factor on which library to use.
I think a more important issue is that Valibot hasn't reached 1.0 yet. But it looks like it's very close.
wruza 42 days ago [-]
Yes, that's just cringe.
sergiotapia 43 days ago [-]
A whole lot of yapping and ZERO code on the homepage. Authors should remove 90% of the stuff on that landing page, jesus!
also since zod is the de facto validation lib, might be worth a specific page talking about why this vs zod. even their migration from zod page looks nearly identical between the two packages.
petesergeant 43 days ago [-]
On the front page there is a prominent section addressing the differences from Zod, which appear to be “much better performance”
I wonder about a Typescript with alternate design decisions.
The type system is cool and you can do a lot of compile time metaprogramming.
However, when there's a type error in computed types, it's a nightmare to visually parse
  Type '{ foo: ..., bar: ..., ... }' is not assignable to type '{ foo: ..., bar: ... }'.
    Type '{ foo: ..., bar: ..., ... }' is missing the following properties from type '{ foo: ..., bar: ..., ... }'
Apart from repeating roughly the same type 4 times (which is a UX issue that's fixable), debugging computed types requires having a typescript interpreter in your head.
I wonder if we had nominal runtime checked types, it could work better than the current design in terms of DX.
Basically, the : Type would always be a runtime checked assertion. Zod certainly fills that gap, but it would be nicer to have it baked in.
The type system would not be as powerful, but when I'm writing Kotlin, I really don't miss the advanced features of Typescript at all. I just define a data class structure and add @Serializable to generate conversions from/to JSON.
wrs 43 days ago [-]
TypeScript has a crazy powerful type system because it has to be able to describe any crazy behavior that was implemented in JavaScript. I mean, just take a look at @types/express-serve-static-core [0] or @types/lodash [1] to see the lengths TS will let you go.
If you write in TS to start with, you can use a more sane subset.
Right. I'm aware. My point was that even though the type system is powerful, somehow, I'm able to represent everything I need to in Kotlin's type system just fine and it feels a lot more type safe because it will throw a type error at runtime in the right place if I do a bad cast.
Typescript `as Foo` will not do anything at runtime, and it will just keep on going, then throw a type error somewhere else later (possibly across an async boundary).
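A small illustration of the point:

```typescript
// `as` is erased at compile time; nothing validates the value at runtime
interface Foo { greet(): string }

const data: unknown = JSON.parse('{"name":"definitely not a Foo"}')
const foo = data as Foo // compiles fine, no runtime check

// the failure surfaces later, far from the cast:
// calling foo.greet() would throw "foo.greet is not a function"
typeof (foo as any).greet // "undefined"
```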
You can, in theory use very strong lint rules (disallow `as` operator in favour of Zod, disallow postfix ! operator), but no actual codebase that I've worked on has these checks. Even the ones with the strictest checks enabled have gaps.
Not to mention, there's intentional unsoundness in the type system, so even if you wanted to, you couldn't really create a safe subset of TS.
Then there's the issue of reading the library types of some generic heavy code. When I "go to definition" in my fastify codebase, I see stuff like this
  = (
    this: FastifyInstance<RawServer, RawRequest, RawReply, Logger, TypeProvider>,
    request: FastifyRequest<RouteGeneric, RawServer, RawRequest, SchemaCompiler, TypeProvider, ContextConfig, Logger>,
    reply: FastifyReply<RouteGeneric, RawServer, RawRequest, RawReply, ContextConfig, SchemaCompiler, TypeProvider>
    // This return type used to be a generic type argument. Due to TypeScript's inference of return types, this rendered returns unchecked.
  ) => ResolveFastifyReplyReturnType<TypeProvider, SchemaCompiler, RouteGeneric>
Other languages somehow don't need types this complicated and they're still safer at runtime :shrug:
rimunroe 42 days ago [-]
> You can, in theory use very strong lint rules (disallow `as` operator in favour of Zod, disallow postfix ! operator), but no actual codebase that I've worked on has these checks. Even the ones with the strictest checks enabled have gaps.
That's surprising. I've worked on a few codebases with reasonably mature TS usage and they've all disallowed as/!/any without an explicit comment to disable the rule and explain why the use is required there.
everforward 42 days ago [-]
I don't write Kotlin, but what that does (assuming I'm guessing at it correctly) requires far more awkward code in most other languages. That looks like it will allow you to extend the types of objects deep inside the library so that you could e.g. create your own Request object without having to type cast inside the HTTP handlers or wrap the entire library.
That shifts the complexity of doing that out of the runtime and into the Typescript preprocessor where it's not going to mess with your production instances.
I also don't think it's all that bad; it's a lot of generic types, but it doesn't appear to be doing anything particularly complicated.
I do think they get awful, though. This is something I've been hacking on that I'm probably going to rewrite https://pastebin.com/VszX3MyE It's a wrapper around Electron's IPC and derives a type for the client from the type for the server (has to have the same methods and does some type finagling to strip out the server-specific types). It also dynamically generates a client based on the server prototype. The whole thing rapidly fell into the "neat but too complicated to be practical" hole.
dhruvrajvanshi 42 days ago [-]
> I don't write Kotlin, but what that does (assuming I'm guessing at it correctly) requires far more awkward code in most other languages
You're NOT assuming correctly. In Kotlin, this would be handled as an extension property on the Request type. You could write it just like normal code instead of extending some global ambient interfaces.
val Request.foo: Bar
    get() = whatever
get("/") { req.foo } // just a property access
You can CMD+Click on it and read the actual implementation (instead of generated type definitions).
The Typescript ecosystem needs these complicated types because of some design choices (no type based dispatch). I suggest looking up how other languages solve these problems. You'll find that in typescript, you have to reach for complex types far sooner than in other languages.
everforward 42 days ago [-]
So if I pass that Request object to another function that expects a Request object, the compiler is going to track that I’m using an extended object instead of the actual Request type in the function signature and allow me to access the “foo” property?
That’s what I’m specifically talking about. Yours is just dependency injecting a type, which is more avoiding the existing types in the library. That would be the “wrapping” option I was talking about. You don’t need to extend the types if you’re just going to dependency inject them. You could just have an entirely separate object that you pass around at that point.
dhruvrajvanshi 42 days ago [-]
If the other method has visibility of my foo property, then it can call it. Nothing is being "injected" anywhere. It's the moral equivalent of foo(req). It's statically dispatched
everforward 41 days ago [-]
That is exactly dependency injection, the functions have to be embedded into a scope that has access to additional values because the framework doesn’t support passing them directly. The types are injected into the function by being embedded into another scope.
It’s not some kind of moral sin, but it is a kludge. The type system is now tied to the structure of your code, because scoping is now intrinsic to your types.
It’s not the end of the world, I’ve worked in similar systems, it just tends to have a heavy mental overhead at some point as you now have to keep scoping in mind as part of your types.
wruza 42 days ago [-]
The complexity of the types you see in traditional nodejs web servers is there because all of them tried to work as a god-function multitool. The type of a router handler literally incorporates the types of all constituents of a generic request-response process and additional server-specific data, all packed into (req, res). This only happens in ultra-generic code that is basically web servers only.
Other languages (libraries really) separate these parts, e.g. Spring Boot seems to hide routing away into a method decorator which infers the body type from a target signature and the server is somehow implied(? through a controller?..). Anyway, it's all there, just not in one place. It has nothing to do with Typescript, it's a js library legacy issue.
presentation 42 days ago [-]
My company’s codebase disallows type coercion. You can disable the lint rule if there’s a legit reason to but then we ask to explain why.
42 days ago [-]
wrs 42 days ago [-]
It's apples and oranges. The type system in Kotlin is integrated into the language. TypeScript is a preprocessor for JavaScript that isn't allowed to change the JavaScript language, or have any runtime.
The "as Foo" construct is for you to tell TS that you know better than it, or that you deliberately want to bypass the type system. You can have a runtime check, but you have to write the code yourself (a type predicate), because TS doesn't write or change any JavaScript*, it just type-checks it.
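A minimal sketch of such a hand-rolled type predicate (the `Cat` type is made up for illustration):

```typescript
type Cat = { name: string; ownerId: number };

// A type predicate: after a `true` result TypeScript narrows `value` to Cat,
// but the runtime check itself is ordinary JavaScript written by hand.
function isCat(value: unknown): value is Cat {
  if (typeof value !== "object" || value === null) return false;
  const record = value as Record<string, unknown>;
  return typeof record.name === "string" && typeof record.ownerId === "number";
}

const data: unknown = JSON.parse('{"name":"Whiskers","ownerId":7}');
if (isCat(data)) {
  console.log(data.name); // `data` is narrowed to Cat here
}
```

Libraries like Zod essentially generate this kind of check (plus the static type) from one schema declaration, so the two can't drift apart.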
I've certainly worked in new codebases where a relatively simple subset of TS types were used. Even then there were a few places the crazy type system was helpful to have. For example, we enforced that the names of the properties in the global configuration object weren't allowed to have consecutive uppercase letters!
(* with minor exceptions like transpiling for new JS features)
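The consecutive-uppercase enforcement mentioned above can be expressed with template literal types. This is a from-scratch sketch, not the actual codebase's rule:

```typescript
// Type-level test for a single character being uppercase.
type IsUpper<C extends string> =
  C extends Lowercase<C> ? false : C extends Uppercase<C> ? true : false;

// Recursively scan a string for two uppercase letters in a row.
type HasDoubleUpper<S extends string> =
  S extends `${infer A}${infer B}${infer Rest}`
    ? [IsUpper<A>, IsUpper<B>] extends [true, true]
      ? true
      : HasDoubleUpper<`${B}${Rest}`>
    : false;

// Map offending keys' values to `never` so any use fails to type-check.
type NoDoubleUpperKeys<T> = {
  [K in keyof T]: K extends string
    ? HasDoubleUpper<K> extends true ? never : T[K]
    : T[K];
};

type GoodConfig = NoDoubleUpperKeys<{ apiUrl: string; maxRetries: number }>;
// type BadConfig = NoDoubleUpperKeys<{ apiURL: string }>; // value becomes `never`

const ok: GoodConfig = { apiUrl: "https://example.com", maxRetries: 3 };
console.log(ok.maxRetries);
```

All of this evaporates at compile time; it exists purely to make bad property names unrepresentable.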
rlp 43 days ago [-]
This is my exact experience. TypeScript seemed to hit a complexity sweet spot about 5 years ago, then they just kept adding more obscure stuff to the type system. I find that understanding TypeScript compiler errors can be almost as difficult as understanding C++ template errors at times.
epolanski 42 days ago [-]
Hint: 9 times out of 10 you only need to read the last part of the error.
Also, there are many ways to make types opaque (not show their entire verbose structure).
trevor-e 42 days ago [-]
Which raises the question of why they don't move that part to the front of the error message. Like you said, 9/10 times I scroll down without thinking, which is silly.
epolanski 42 days ago [-]
I have no clue, but I guess there were (and are) good discussions around it.
phplovesong 43 days ago [-]
This. Haxe is a more sane TS alternative in 2024.
earslap 42 days ago [-]
Haxe is amazing, has macros etc. A force multiplier if you are a solo developer for sure. But damn, you feel kinda alone if you are using it. Not everything is an npm install away which negates your velocity gains from using a saner language.
phplovesong 42 days ago [-]
I RARELY/NEVER have to build an app so fast that I just fart out whatever broken code as fast as my fingers can type. IF I get a project like this with a deadline of "yesterday" I politely just refuse. I would be wasting my personal time and the client's time, and the result would be a broken mess that will eventually take more time to fix than it would have if I had in fact done it "correctly" from the get-go.
That said, Haxe has externs, enabling you to target JS/PHP and use the rich ecosystem both languages have. The best part of externs is that IF I only use 4 things from a given package, I statically KNOW I only use those 4 things, and can refactor more easily, or even build the thing I need myself.
earslap 42 days ago [-]
When I'm talking about velocity, I'm not talking about coding fast, but being able to write DRY, flexible yet easily maintainable code that can weather future requirements / refactorings easily. Personally, I'm also talking in the context of my own projects so nobody is breathing down my neck or pressuring me with time. I just want to write good code that is a joy to maintain for years to come.
pier25 42 days ago [-]
Ironically Haxe was inspired by ActionScript 3 which was the basis for the EcmaScript 4 proposal which was abandoned back in 2008 or so.
jadbox 42 days ago [-]
Is Haxe still being actively developed? I loved it back in the day. The blog hasn't had an update in years.
phplovesong 42 days ago [-]
It is. And it's only getting better SLOWLY with each iteration. The thing I love about Haxe is that once you use it, your installed version is not legacy in a month (unlike the npm/typescript ecosystem). Haxe is fully working, and does not need a lot of "new" features.
More news about Haxe can be found here: https://haxe.io/ (the old blog is not updated AFAIK)
spencerchubb 43 days ago [-]
Couldn't this be improved by showing a diff between the mismatched types?
dhruvrajvanshi 43 days ago [-]
Yes, I did mention that the UX of the type errors could be improved and probably should, but once you start getting into conditional types, and nested types (which may not be fully expanded when you see the error), it gets hairy.
mhh__ 42 days ago [-]
Can you do any compile time metaprogramming in typescript?
ts-morph provides an easy way to use the TypeScript Compiler API to view and edit the AST before compiling, once you get your head around the API (it has good examples but isn't thoroughly documented on the web).
Nope. For this you pick Haxe (https://haxe.org/); it has an awesome macro system, and a state-of-the-art compiler that's really, really fast.
mhio 38 days ago [-]
How does haxe modify a typescript project pre or post compile?
aleclarsoniv 42 days ago [-]
I'd recommend TypeBox[1] as an alternative, which has a runtime “compiler” for generating optimized JS functions from the type objects. It also produces a JSON schema, which can be useful for generating API docs and API clients if needed.
It also has a companion library[2] for generating TypeBox validators from TypeScript definitions, which I'm currently using in an RPC library I'm working on.
Man why doesn’t it have a cli tool?? Wouldn’t it be nice to :'<'>!typeboxify a selection.
kuon 42 days ago [-]
I'm a rust,zig and elixir developer and I had to work on a large typescript project that use zod.
I felt like I was being punished for everything. (Maybe some things were project-specific, so I am not saying this as an absolute.) It's slow, the syntax is horrible, errors are obscure super-long lines, and it can compile and then explode later (which is what Elixir does too, except Elixir will happily restart and recover)...
Elixir is based on duck typing mostly, but it works very well because you just pattern match your data when you use it. Rust is very strict and can have cryptic errors, but as everything is baked in the language it is way easier to manage.
I am not saying this to be snob about typescript and JS, but I really felt pain when working with that ecosystem, and I wonder if I'm old and stupid or if those tools are really half baked and over complicated.
ARandumGuy 42 days ago [-]
My company uses zod to validate data for HTTP requests, and I'd say it works pretty well for that. It lets us detail our data structure in zod, validate the completely untyped HTTP data, and be confident that once we reach our actual code, the data is well typed.
Zod feels like a crutch for limitations in Javascript and Typescript. But I've found it to be a very useful crutch, and I wouldn't want to write a Typescript API without it.
joshribakoff 42 days ago [-]
>elixir will just recover
That's not true. It just crashes the “sub” process, and if the parent process spawns the sub process again with the same inputs, it's just going to crash again.
Are you aware you can also try/catch your errors in typescript?
The whole point of the library is to validate something at runtime so of course it is going to blow up. There are also API methods that simply return a boolean instead of crashing if it fails validation. You can then use type guarding and narrowing of the type.
ForHackernews 42 days ago [-]
Erlang/elixir is built around a "let it crash" model. The idea is that the system as a whole is robust, not individual processes.
evilduck 42 days ago [-]
An infinite loop of crashing isn't very useful though. "Let it crash" doesn't make a whole system robust all by itself, since you can also just say "let it throw exceptions" in just about any language. Erlang's whole-system robustness emerges when far more than exception handling is built to support things that crash: it expects parallel processes and even redundant computing infrastructure to handle the things that are crashing, and hopefully those redundancies don't fail in the same way. Erlang only achieves its robustness because nobody really chooses Erlang without knowingly buying into the entire setup it requires; without that setup, it won't actually deliver that robustness.
The phone systems that Erlang's design emerged from naturally had these parallelism and distributed-system properties to leverage and build on. Running Erlang on a single-core SBC to display virtual signage, limiting it to a single thread with no redundancy of any kind, and then taking the approach of "let it crash" is not going to create a famously robust Erlang setup either; it's just going to create an Erlang-powered signage system that crashes and halts the same as any other runtime would. Erlang/OTP is a physical-systems and software-design approach that you can't just put anywhere or bolt onto any arbitrary thing. You're not going to build an OTP-like Single Page Application, because if you reliably crash the browser tab's process every time you start up, it's just going to keep crashing no matter how many times you refresh the page.
Dumble 42 days ago [-]
If your server process kills itself when validating data, that's not Zod's problem. Validation failing is an expected case; that is the use case for such a library.
Stoids 43 days ago [-]
The fragmentation around runtime validation libraries is pretty crazy. The fact that half these comments mention some alternative library that mimics almost the exact API of Zod illustrates that.
It is filling a necessary shortcoming in the gradual typing of TypeScript, and using validator schema types to drive generic inference in other APIs is powerful. I am optimistic about an obvious leader emerging, or at least a better story about swapping between them more easily, but it's a bit annoying when trying to pick one to settle on for work with confidence. That being said, Zod seems like the community favorite at the moment.
skybrian 42 days ago [-]
Yes, it's annoying. I share your optimism. This is how the JavaScript (and now TypeScript) community figures things out.
Note that TypeScript had competitors, too. It got better. Zod has an early lead and is good enough in a lot of ways, but I'm not sure it will be the one.
Perhaps someday there will be a bun/deno-like platform with TypeScript++ that has validation merged in, but it's probably good that it's not standardized yet.
leontrolski 43 days ago [-]
For those of you from a Python background - this is basically "Pydantic for Typescript". See also trpc - cross client/server typing using zod under the hood - https://trpc.io/
timpetri 43 days ago [-]
Looking around on Twitter and repos in the OSS community, it appears that Zod is now almost always favored over yup, despite an almost identical API. Curious to hear what people think if they've worked with both. We went with Yup early on at my company, and now that's what we use for consistency in our codebase. I haven't personally found it to be lacking, but some of the logic around nulls and undefined always lead me back to the docs.
roessland 43 days ago [-]
My company used Yup initially but we moved to Zod to be able to infer types from schemas. For example, API response payloads are Zod schemas. OpenAPI components are also generated from Zod schemas.
There are some performance issues, and WebStorm is struggling, which forced me over to VS Code.
But overall pretty happy.
kabes 43 days ago [-]
But Yup also allows to infer types from schemas...
roessland 42 days ago [-]
Sorry, might have mixed up Yup and Joi.
Anyways, I prefer Zod to both of them.
greener_grass 42 days ago [-]
Some tech influencers pushed Zod - that's basically the entire story
bjacobso 43 days ago [-]
zod is great, but I have been moving to @effect/schema and think it deserves a mention here. @effect/schema is the evolution of io-ts, which originally inspired zod.
It supports decoding as well as encoding, and fits natively into the rest of the effect ecosystem.
It does come with the cost of importing effect, so might not be the best in certain scenarios. However, there are smaller options than zod if that is a concern.
obeavs 43 days ago [-]
Yep. Effect is so cool. The ability to encode AND decode is tomorrow's standard.
With all the work they're doing on durable workflows, hard to imagine that 2025 is anyone else's year.
dgellow 43 days ago [-]
Zod is fantastic, we use it pretty extensively at Stainless. Definitely one of my favorite JS libraries. Not calling it a parser combinator was a really good marketing move
n00bskoolbus 42 days ago [-]
Something cool that I can't remember if it was posted on HN at one point or I stumbled across when looking for alternatives to yup: this repo has been compiling a bunch of different benchmarks for runtime validation across TS validation libraries. Obviously to some degree the performance is arbitrary when you're reaching millions of operations per second, but on the flip side their benchmarks are against rather simple data. I'd be interested to see a comparison with more nested or otherwise complex data. Maybe something to look at in my spare time.
Can someone explain to me why I would need something like this over just hand rolling my own validation? Almost every time I've seen one of these JS validation libraries used, it was to verify a form with < 10 inputs, and every one of those validations was no more complex than `isNonEmptyString = (x: string | undefined) => x != null && x.trim() != ""` or `isGtZero = (x: any) => !isNaN(parseInt(x, 10)) && parseInt(x, 10) > 0`
This feels about as trivial as an aggregate package of the absurd `isOdd` / `isArray` micropackages. Surely I must be missing something, because they're incredibly popular?
_betty_ 40 days ago [-]
For me, I can auto-gen basic validation from either my TypeScript or GraphQL types, then fairly easily extend it manually.
Yeah, I could do it manually, but the integration with other tools was the killer.
kpmah 42 days ago [-]
I recently came across (but haven't yet used) Typia, which appears to let you do validation with regular TypeScript syntax: https://github.com/samchon/typia
disintegrator 42 days ago [-]
We built our TypeScript SDK code generator on Zod. So data coming from the user or API is validated and transformed using generated Zod schemas. It's a fantastic library with some caveats, specifically with Zod v3:
- Yes, it doesn't tree-shake well because of the chaining API. We accept this tradeoff because there's a lot of value coming from those kilobytes.
- In pure benchmarking numbers, it's nowhere near the fastest validator, but I'd wonder where you're using Zod and needing millions of operations per second. In my world, the network absolutely diminishes any performance benefits of the fastest validation library.
- It can slow down typescript type checking in extreme cases. This one is aimed at those folks that have a large number of very complex schemas. A good majority will not encounter this problem.
I say all this, but on the flip side I will pick Zod again and again, because it has the biggest community behind it and is well on its way to having an ecosystem around it, e.g. react-hook-form, trpc and other framework integrations. For most of my projects the trade-offs above don't materialize. Regarding the performance and bundle size concerns, I've spoken to Colin, creator of Zod, on a couple of occasions about these, and they're all getting addressed in Zod v4, which I'm raring to try out when it's available.
molszanski 43 days ago [-]
I very much prefer an alternative that's sadly much less hyped:
https://arktype.io/
threatofrain 43 days ago [-]
What do you like most about arktype?
ecuaflo 42 days ago [-]
When there’s feature parity, what’s the next differentiator for you? For me, performance.
Though I admit another important aspect is community adoption. If your 3rd-party dependency uses zod internally, well now you’re bundling in both, and the added network latency probably negates any performance improvement you were getting in a web app. That’s why I wish libraries would use something more generic that allows you to dependency-inject what you’re already using, like https://github.com/decs/typeschema
molszanski 42 days ago [-]
I think the homepage kinda sums it up. It just works and has OP performance, stability and quality
Cu3PO42 42 days ago [-]
Zod has been a great boon to a project I've been working on. I get data from multiple sources where strong types cannot be generated and checking the schema of the response has caught a great number of errors early.
Another feature that I use intensively is transforming the response to parse JSON into more complex data types, e.g. dates but also project-specific types. In some situations, I also need to serialize these data types back into JSON. This is where Zod lacks most for my use-case. I cannot easily specify bidirectional transforms, instead I need to define two schemas (one for parsing, one for serializing) and only change the transforms. I have added type assertions that should catch any mistakes in this manual process, but it'd be great if I didn't have to keep these two schemas in sync.
Another comment mentioned @effect/schema [0], which apparently supports these encode/decode relationships. I'm excited to try it out.
Thank you for the pointer! I will certainly consider TypeBox as well when the time comes to migrate.
epolanski 42 days ago [-]
I use schema extensively and I can tell you it hits the sweet spot for your use case. We have lots of similar use cases.
bearjaws 43 days ago [-]
One of the brilliant decisions of the AI SDK from Vercel was to use Zod.
It makes tool calling and chaining very robust, despite how finicky LLMs can be.
diggan 43 days ago [-]
As someone who never jumped onto the TypeScript hype-wagon, what is this for? Is this something like clojure.spec but for TypeScript, so you can do runtime validation of data instead of compile-time validation?
I think the killer feature of Zod is type inference. Not sure if Joi has support for it yet but you can take a zod schema and wrap it in `z.infer` to get the typescript type.
Personally I use zod in my API for body validations; it's super nice to write the schema then just use `type Body = z.infer<typeof schema>` to get the TS type to use inside the code.
threatofrain 43 days ago [-]
That's table stakes for this niche.
newaccountman2 43 days ago [-]
> Is this something like clojure.spec by for TypeScript so you can do runtime validation of data instead of compile-time
not really "instead", more like "in addition to". Even if your code compiles, if you are receiving data, e.g., via API, then you need to check that it actually conforms to the type/schema you expect. What is run is JS, so it, sadly, won't just crash/error if an object that is supposed to be of `type Cat = {name: string, ownerId: number}` lacks an `ownerId` at runtime.
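A small illustration of that runtime hole (using the `Cat` type from above):

```typescript
type Cat = { name: string; ownerId: number };

// JSON.parse returns `any`, so this cast silently lies when data is malformed.
const cat = JSON.parse('{"name":"Whiskers"}') as Cat;

// Compiles fine, but `cat.ownerId` is `undefined` at runtime...
const id = cat.ownerId;

// ...and the failure surfaces far from the parse site, e.g. as NaN
// in later arithmetic, rather than as an error at the boundary.
console.log(id + 1); // NaN
```

A schema validator moves that failure to the application boundary, where it can be handled deliberately.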
Have you used Pydantic in Python? It's like that, but feels worse, IMO lol. I say this because Pydantic fits into writing Python code much more naturally than writing Zod stuff fits into writing TypeScript, IMO.
naberhausj 43 days ago [-]
I've not used either, but it appears to be similar to JOI, yes.
The main distinction is that ZOD allows you to extract a TypeScript type from your schema. This means you get both compile-time and run-time type checking.
VoidWhisperer 43 days ago [-]
Correct on the part of it being a runtime validation of data library (I can't as easily speak to whether or not it is similar to joi, never used it)
valbaca 43 days ago [-]
yep.
Typescript is build-time validation, but in the end TS is "just JS"
Zod provides runtime validation (and also works well with TS)
molszanski 43 days ago [-]
Deepkit.io has cool validations in runtime based on TS, but it is whole another level.
matt7340 42 days ago [-]
I really like the Decoders library for this. Similar in function to Zod, but a more Elm inspired approach - https://decoders.cc/
agluszak 42 days ago [-]
Could someone explain to me what problem exactly Zod solves?
Why do you need to do `type User = z.infer<typeof User>;` instead of declaring a class with typed fields and, idk, deriving a parser for it somehow? (like you would with Serde in Rust for example). I don't understand why Zod creates something which looks like an orthogonal type hierarchy.
For context: I come from the backend land, I enjoy strong, static typing, but I have very little experience with JS/TS and structural typing
tubthumper8 42 days ago [-]
> deriving a parser for it somehow
Serde in Rust does this with the Rust macro system, but TypeScript doesn't have a macro system. That's why people have to go the other way, the programmer defines the parser, then TypeScript can infer the type from the parser.
I have seen a library that invented their own macro system (a script that you configure to run before build, and it writes code into your node_modules directory), though I can't recall the name.
arzig 42 days ago [-]
There’s no macro system in TS that could analyze the type to build the parser. So, you work the other way and build the parser and then produce the type from that.
_heimdall 42 days ago [-]
Zod offers runtime type validation where typescript only does this at build time. You can also use it for data normalization, safely parsing date strings to Date objects for example.
42 days ago [-]
outlore 42 days ago [-]
The “type User =“ statement creates a TypeScript type from the zod schema, which can be useful when passing that definition around to functions
The schema object is useful for runtime validation, e.g. User.parse(). this is handy when validating payloads that come over the wire that might be untrusted. the output of the “parse()” function is an object of type User
you can kind of think of it like marshaling Json into a struct in Go :)
xmonkee 42 days ago [-]
The User object in your example is used to parse the data. It's the “somehow” part of your question. There is no way to go from a type to a runtime check in typescript (there is no runtime awareness of types whatsoever), so zod solves this by having you write the zod object and then deriving the type from it. Basically you only have to write one thing to get both the parser and the type.
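A toy version of that trick, assuming nothing beyond plain TypeScript (this is not zod's real implementation): the schema is an ordinary runtime value, and the static type is computed back out of it.

```typescript
// Minimal runtime schema values.
const str = {
  parse: (v: unknown): string => {
    if (typeof v !== "string") throw new Error("expected string");
    return v;
  },
};
const num = {
  parse: (v: unknown): number => {
    if (typeof v !== "number") throw new Error("expected number");
    return v;
  },
};

// An object schema assembled from field schemas; the static type of its
// parse result is computed from the shape via ReturnType.
function object<S extends Record<string, { parse: (v: unknown) => unknown }>>(shape: S) {
  return {
    parse(v: unknown): { [K in keyof S]: ReturnType<S[K]["parse"]> } {
      const out: any = {};
      for (const key in shape) out[key] = shape[key].parse((v as any)?.[key]);
      return out;
    },
  };
}

// One definition yields both the runtime parser and the static type.
const UserSchema = object({ id: num, name: str });
type User = ReturnType<typeof UserSchema.parse>; // { id: number; name: string }

const u: User = UserSchema.parse({ id: 1, name: "Ada" });
console.log(u.name);
```

Serde goes the other direction (type first, parser derived by macro); without macros, TypeScript libraries go schema first, type derived by inference.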
cheriot 42 days ago [-]
I used it to validate that data from config files matched the schema. I imagine it could be useful for other sources of supposed-to-be-structured data, like an HTTP body.
root_axis 43 days ago [-]
I prefer typebox because it uses JSON schema. As far as I'm aware it's otherwise on par with Zod, but I might be unaware of some capabilities of Zod that typebox lacks.
psadri 43 days ago [-]
One thing to note about zod - it clones objects as it validates (parses) them. Eg z.array(z.object()) with an array of 10k objects will result in 10k cloned objects → slow.
What’s the go-to reason to use this over ajv? In particular, being rooted in JSON Schema feels like a pretty big win tooling-wise and interop-wise.
petesergeant 42 days ago [-]
You can reflect TS types out of it. There are 3rd party libraries to generate JSON Schemas from Zod objects, which is helpful if you have non-TS clients you want to support
shadowfiend 42 days ago [-]
Ajv has supported that for at least a couple of years afaik, and consumes JSON Schema natively which is good for consuming other APIs, not just feeding external clients—its base data format is interoperable, basically.
That’s mostly why I’m curious about the lack of mention :)
petesergeant 42 days ago [-]
You can’t reflect types out of Ajv. TS types are compile-time, Ajv consumes JSON schemas at runtime.
That seems to show that you still have to bring your own types for JSON Schema, as evidenced by their example both explicitly defining the interface and then passing it in as an argument.
I wasn’t aware however of JSON Type Definitions, which hadn’t been invented last time I released software with Ajv, but it does appear to be able to reflect those as well as validate from them, so thank you for showing me that.
shadowfiend 31 days ago [-]
Ah, I think I misunderstood you. Yes, this does mean that you need something else to define the typescript to json schema conversion—either by using another tool or by starting from json schema and getting to the typescript types you want.
Feels like it’s worth that trade off to have a consistent experience consuming other APIs as well, but I could be wrong; I think so far I’ve only used it when I need to consume APIs rather than produce them.
Timon3 42 days ago [-]
Personally a big factor: I haven't had the Zod creator scrape my email and send me a newsletter asking for money. That kind of soured me on ajv.
shadowfiend 42 days ago [-]
Ohp. That sounds pretty annoying. Was this a GitHub scrape of places using the library?
Timon3 42 days ago [-]
I'm not 100% sure, they most likely scraped the author emails of all NPM packages that (transitively) depend on ajv. Here's the GitHub issue from back then: https://github.com/ajv-validator/ajv/issues/1202
shadowfiend 41 days ago [-]
Appreciate the pointer!
Timon3 41 days ago [-]
No problem!
Just to make it explicitly clear, I only received one email - reading my earlier comment back, it made it seem like there maybe was more. It could have definitely been worse!
BenoitP 43 days ago [-]
And thus custom validation goes to JSON, completing a what-is-old-is-new-again cycle. After XML/XSD, after CORBA.
vorticalbox 43 days ago [-]
At work I have used zod, myzod, joi though I have settled on class-validator as it ties in with nestjs really well.
aleksiy123 43 days ago [-]
Does anyone have a nice combination of tooling for typed handlers + client generation.
Thinking maybe Zod + Zod open API.
Really looking to replicate similar spec first workflow similar to protobuf services.
My tool https://openapi-code-generator.nahkies.co.nz/overview/about generates typed handlers based around koa (routing, req/res validation using zod) from openapi 3, as well as typed clients with optional zod validation using fetch/axios.
It also supports typespec using their transpilation to openapi 3 tooling
simplesagar 38 days ago [-]
Check out https://www.speakeasy.com/. We generate fully type-safe SDKs from OpenAPI, and for our TS offering we delegate the type checking to Zod.
From my understanding trpc is very similar, however, the rpc mechanism is not a standard. ts-rest produces openapi schemas and speaks REST over http, as well as a typed client.
That being said, I am actually slowly migrating off ts-rest and adopting @effect/schema and @effect/platform/HttpApi, I foresee this being the direction the typescript ecosystem heads in over the next few years. However, the APIs are not stable yet and it has a bit of a learning curve, so would not adopt lightly
AWebOfBrown 42 days ago [-]
I really wanted to adopt tRPC, but the deal breaker was it being opinionated on status codes without allowing configurability. Because I needed to meet an existing API spec, ts-rest was a better option. I think there's an additional option with a native spec generator in frameworks like Hono, and maybe Elysia.
brap 43 days ago [-]
I always found it pretty awkward that you even need libraries like this. A limitation of TS I suppose.
paulddraper 42 days ago [-]
What language doesn't need libraries like this?!
Java - Jackson
Rust - serde
Python - marshmallow
etc
bitbasher 42 days ago [-]
Go
paulddraper 41 days ago [-]
True, Marshaling/Unmarshaling is part of the Go stdlib.
(Make sense, Go has arguably the largest stdlib of any language.)
bluepnume 42 days ago [-]
You find it weird that a type system doesn't do runtime validation? Is that common in many other languages?
dhruvrajvanshi 42 days ago [-]
Well in most statically typed languages with a VM (Java/C#), there's some sort of runtime validation
In Java
Object something = new HashMap<String, Object>();
String badCast = (String) something; // This line throws a ClassCastException because something is not a String
This has the advantage of throwing an exception in the correct place, instead of somewhere down the line.
yen223 42 days ago [-]
Other statically-typed languages do have to deal with the problem of parsing external objects. In my experience, none of them have parsers as good as Zod in terms of ergonomics.
dhruvrajvanshi 42 days ago [-]
> Other statically-typed languages do have to deal with the problem of parsing external objects.
Well that's just blatantly not true. Which languages are you thinking of? I'm sure I'm misunderstanding what you said.
I can't think of a single server side language that doesn't have to parse external untyped objects. That's where these serialization libraries come into play.
For example, in Kotlin, you declare a data class and mark it as @Serializable and it generates `toJSON/fromJSON` for you. IMO it's a much better experience than Zod.
yen223 42 days ago [-]
> I can't think of a single server side language that doesn't have to parse external untyped objects.
That's what I said too.
> For example, in Kotlin, you declare a data class and mark it as @Serializable and it generates `toJSON/fromJSON` for you. IMO it's a much better experience than Zod.
If the JSON object matches the data class exactly, the Zod parser and the Kotlin Serialization parser and Jackson and all those other JSON parsers are similar in complexity.
However, where Zod shines is if the JSON object doesn't match your domain class exactly, e.g. you want to parse a JSON number into a Date, or you want to parse a string field into some custom domain object. In those cases, in zod this is a one-liner with `.transform(...)`. Other libraries will require all kinds of weird workarounds to support this.
The other thing Zod does really well is composition, i.e. making new schemas out of existing schemas. Something like this is difficult to express in most language's parser frameworks:
const User = z.object({id: z.string(), username: z.string(), ...})
const CreateUserPayload = User.omit({id: true}) // Same as user, but without the id field
const UpdateUserPayload = CreateUserPayload.partial() // Same as CreateUserPayload, but now all the fields are optional
ashu1461 42 days ago [-]
I think static type inference can be a big win, considering that the typescript type is already a contract, and defining the contract again for validation libraries (joi / zod) feels like overkill.
Perhaps one day we will have properly working wasm stuff and we can forget about all these terrible because-typescript-sucks-because-js-sucks libraries.
hombre_fatal 42 days ago [-]
What does this have to do with a validation library? These exist for every language so that you can validate unknown data, e.g. user data coming over the wire or data from an external process.
anonzzzies 42 days ago [-]
A proper language would just compile that in; aka when something doesn't fit the struct, it dies. That's what Zod does, because TypeScript can't: you can have lovely and complex types until you're blue in the face, but at runtime they pass because it's JS. That doesn't happen in other languages. Try some Rust or Haskell and try to stuff in, via REST, something that doesn't fit the struct you defined. TypeScript happily continues unless you use all kinds of crap, like Zod, to validate again.
Why not make it so typescript has options to compile to zod etc with a flag for runtime?
I seriously don't understand why I spend time on writing this as the fanbois who never tried anything nice that doesn't need this think it's the best thing. You all loved left pad and this is no different.
progx 42 days ago [-]
Aha, and how do you program wasm?
anonzzzies 42 days ago [-]
Wikipedia:
The main goal of WebAssembly is to facilitate high-performance applications on web pages, but it is also designed to be usable in non-web environments.[7] It is an open standard[8][9] intended to support any language on any operating system,[10] and in practice many of the most popular languages already have at least some level of support.
I think the real turning point was typescript 5.5 (May 2024). The creator of typescript personally fixed a bug that unlocked a more natural generator syntax for Effect, which I think unlocks mainstream adoption potential.
https://twitter.com/MichaelArnaldi/status/178506160889445172... https://github.com/microsoft/TypeScript/pull/58337
Nowadays I’d rather rely on libraries that don’t require a phd to use them properly.
This is 100% how to write more reliable software. We are in the process of reducing our TS dependencies to effectively just express and node-postgres and everything is becoming infinitely easier to manage.
I may simply be too dumb for lots of fancy functional programming. I can barely understand code when reading one line and statement at a time. Reading functions calling functions calling functions just makes me feel like gravity stopped working and I don't know which way is up. My brain too small.
I wouldn't use Effect for a lot of things. For some things, I'm very glad to have it. One thing Effect has going for it that Ramda didn't is that it's much less abstract and it's quite a bit more opinionated about some more complex concepts like error handling, concurrency, or scheduling.
Kind of like state machines. You shouldn't use them for everything. For some things, it's a bad idea not to (in my opinion).
Then of course subjectivity is a factor here. Some people will never like conventions like Effect, and that's fine too. Just write what feels right.
Having experience with ZIO / FP in Scala, I'm a bit biased in seeing the value of Effect systems as a whole, but taking on the burden of explaining that mental model to team members and future maintainers is a big cost for most teams.
Is "retry / observability / error handling" something that comes from Effect?
Retrying[0], observability[1], and error handling[2] are first-class concerns with built-in combinators that make dealing with those problems quite ergonomic. These features are table stakes for any serious application, but unfortunately the story around them in the TypeScript ecosystem is not great, at least as far as coherence goes. You often end up building abstractions on top of unrelated libraries and trying to smash them together.
I'm a big fan of ReasonML / OCaml, and I think the future of TypeScript will involve imitating many of its code patterns.
[0] https://effect.website/docs/guides/error-management/retrying
[1] https://effect.website/docs/guides/observability/telemetry/t...
[2] https://effect.website/docs/guides/error-management/expected...
I liked the idea of Ramda until I saw code bases that were using it for everything.
I've been doing JS for over a decade now and I couldn't understand a thing.
It's the same effect as adding async code to Python or Rust, suddenly the entire team and the entire codebase (and often dependency choices) must adhere to it.
You can choose to make a single flow in your application an effect program. Or you can base most of your functions around it. It's really up to you how and where it's used. If you want to use an effect within non-effect code, that's easy to do, too.
You can think of effects like values. The value is obtained by executing the effect. Until it's called, the effect can be placed anywhere, in any function, in a generator, within promises, etc. Once you need its value, you execute it. It's compatible with most code bases as long as you can execute it to get the value. It's really up to the developer how portable they want their effects to be.
Passing Effects around will similarly infect the entire codebase, resulting in the entire dev team who interacts with it needing to buy in. Limiting the output of Effects to a single module owned by one zealot dev undermines having it around in the first place and it'll get removed and replaced as soon as that person leaves or gives up the fight.
Our team has been full Effect for two years, and juniors can pick it up and start working on it with ease.
Solutions like effect are easier to appreciate as your application starts growing in complexity beyond simple todo apps.
Solutions like effect/schema are easier to appreciate as soon as you start needing complex types, encoding/decoding, branded types and more.
I am quite confident that Effect will keep growing steadily in popularity and eventually go mainstream.
It took more than 5-6 years for TypeScript or React to spread through the JS community. Effect is here to stay, and I'm confident it will eventually be adopted by plenty of developers.
For the love of god just use User / RegisteredUser / GuestUser and other abstractions that have some basis in the real world.
[0] https://effect.website/docs/integrations/express
FP always goes out of its way to avoid using the language it operates in, and to criticize ways of doing things it has just imagined. As if it wanted to stay as far above the unwashed peasants as it could, yet has to do the same job to keep existing.
how to test () => makePayment()? (from the link)
You don’t. You test constituents like request body generation and response handling. It’s inside. You can’t test your Effect version of this code either. It’s a strawman.
Some core things from Effect though that you can see in that Express example:
* Break things down into Services. Effect handles typed dependency injection for services, so you can easily test them and run different versions for testing, production, etc.
* Fibers for threaded execution
* Managing resources with scope to make sure they're properly closed
I think a lot of these things often aren't truly appreciated until you've had something go wrong, or you've had to build a system to manage them yourself.
I fully agree parts of the ecosystem are complex, and likely not fully ready for broad adoption. But I do think things will simplify with time, patterns will emerge, and it will be seen as react-for-the-backend, the de facto first choice. effect + schema + platform + cluster will be an extremely compelling stack.
The learning curve just about turned me away from it at the start, but I'm glad I stuck with it.
I think learning Effect would actually teach a lot of people some very useful concepts and patterns for programming in general. It's very well thought out.
This way you can use native language features for discriminated unions, functional pipelines, and exhaustive pattern matching to model your domain instead of shoe-horning such functionality into a non-ML language!
Model your domain in F#, consume it in Python or C# backends and TypeScript frontends. The downside is needing to know all of these languages and run times but I think I'd rather know F# and the quirks with interacting with TypeScript than a library like Effect!
[1] https://fable.io
1. Debugging can become quite a pain. Nobody likes debugging generated code.
2. You don't get to use libraries and tools from the enormous JavaScript ecosystem.
3. Eventually you'll find some web feature that they haven't wrapped in your language and then you're in for FFI pain.
In the end I found Typescript was good enough that it wasn't worth dealing with those issues.
You can build an entire application in F# and compile to JS but another option is compiling to TS and calling that F# code from your TS app. I/O and views and whatnot are written in TS and the domain model is in F#. The entire model could be nothing but pure functions and resolve to a single expression!
It looks very similar in its ideas to fp-ts (in the “let’s bring monads, algebraic types etc to typescript” sense).
But I did hear from teams that embraced fp-ts that things kinda ground to a halt for them. And those were category theory enthusiasts that were very good scala devs so I’m sure they knew what they were doing with fp-ts.
What happened was that the typescript compile time just shot into minutes, for a moderately sized micro-service, without anything externally heavy being introduced like you could on the frontend.
It just turned out that the TypeScript compiler was not so great at tracking all the inferred types throughout the codebase.
So wonder if things have improved or effect uses types more intelligently so that this is not an issue.
It's the next version of fp-ts, developed by the same people, AFAIK
Rather disappointing to see something like this plugged as an alternative to zod, which is a nice library that stays in its corner and has a nice fixed scope to its functionality.
https://valibot.dev
This feels like an issue that reduces down to the Halting Problem, though. Halting is a function that could be made a member of a class, so if you could tell whether that method is used or not then you could tell whether the program will halt or not. I think it's one of those things that feels like it should be fairly easy, and it's really really not.
I don't need to be able to eliminate every single unused function in every situation, but if I can prove that certain functions are unused then I can delete just those functions. We're already doing this regularly with standalone functions, so my question is just why this isn't done with class members.
Being able to access class members using square bracket syntax with a variable also seems like it would make it really difficult to prove that something isn’t used. I’m thinking something unhinged like using parameters to build a string named after a class member and then accessing it that way.
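For instance, something like this contrived sketch defeats any per-member usage analysis, because the member name only exists at runtime:

```typescript
class Api {
  getUser() { return "user"; }
  getPost() { return "post"; }
}

// The method name is assembled from a parameter, so a minifier cannot
// statically prove that either method is unused
function call(api: Api, kind: "user" | "post"): string {
  const method = ("get" + kind[0].toUpperCase() + kind.slice(1)) as keyof Api;
  return api[method]();
}
```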
Dunno, I would be curious if someone has a definitive answer as well.
edit: The creator of Terser is working on flow analysis for his new minifier, according to him[1].
[1]: https://github.com/terser/terser/issues/1410#issuecomment-17...
Zod doesn't (yet[0]) and it's been a pain point for me.
[0] https://github.com/colinhacks/zod/issues/635
[1]: https://github.com/sinclairzx81/typebox
In any case, this might actually be a good use for an LLM to post-process it into whatever style you want. I bet there's even a browser extension that could do it on-demand and in-place.
- I support the creation of schemas for any primitive data type.
- Among complex values I support objects, records, arrays, tuples as well as various other classes.
- For objects I provide various methods like pick, omit, partial and required.
- Beyond primitive and complex values, I also provide schema functions for more special cases.
Same for "Mental model", "Pipeline", "Parse data", "Infer types", "Methods" and "Issues" - I'll assume the other sections also follow this style. That's all not showing up for you?
While the LLM suggestion is nice, it's not something I'm comfortable with unless hallucinations are incredibly rare. Why would I use a library whose documentation I have to pass through an unreliable preprocessor to follow a normal style?
[0]: https://valibot.dev/guides/schemas/
I honestly don't want my validation library to "tell a story" at the expense of documentation clarity. It's absolutely fine that this project uses it, I don't want to impose my view on them - I guess it's just not the validation library for me.
In the software field you get a large portion of people that don't buy into the concept of professionalism, for various reasons - chiefly that the hacker culture and the ease of contributing to the "field" mean the gauntlet one runs to become a "professional" isn't inherently a given.
As a whole this is a good thing but it does mean if you operate as a "professional" maybe sometimes you have to realize that something doesn't exactly gel with your ethos (case in point). It doesn't mean it is bad; just maybe not for you and yours.
There's still more at play, since I really keep visually stumbling over those sentences, but that seems to be more related to me. And you're absolutely right that this doesn't make the project objectively worse - I wish them best of luck and hope their approach to documentation helps others!
Not an issue for me, to be honest. Why does it bother you at all?
1. The biggest part is that I've simply never seen documentation written in this style, any mentions of "I" or "we" are usually explaining the choices made by the author(s). When skimming documentation I pay more attention to those parts. Here those parts don't have a comparable meaning.
2. The smaller part is that the writing style reminds me of the way brands use mascots with first-person writing to advertise to children. There's not really any other association I have with this way of writing, and it makes me feel like the author either isn't taking the project seriously, or me.
I'm not trying to argue that the documentation should be understood this way, or that it should be changed - but I've stumbled over this multiple times, and can't imagine that it's just me.
I think a more important issue is that Valibot hasn't reached 1.0 yet. But it looks like it's very close.
also since zod is the de facto validation lib, might be worth a specific page talking about why this vs zod. even their migration from zod page looks nearly identical between the two packages.
I wonder if we had nominal runtime checked types, it could work better than the current design in terms of DX. Basically, the : Type would always be a runtime checked assertion. Zod certainly fills that gap, but it would be nicer to have it baked in.
The type system would not be as powerful, but when I'm writing Kotlin, I really don't miss the advanced features of Typescript at all. I just define a data class structure and add @Serializable to generate conversions from/to JSON.
If you write in TS to start with, you can use a more sane subset.
[0] https://github.com/DefinitelyTyped/DefinitelyTyped/blob/mast...
[1] https://github.com/DefinitelyTyped/DefinitelyTyped/blob/mast...
Typescript `as Foo` will not do anything at runtime, and it will just keep on going, then throw a type error somewhere else later (possibly across an async boundary).
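A minimal illustration of that failure mode (the `Foo` type here is invented for the example):

```typescript
type Foo = { bar: string };

const data: unknown = JSON.parse('{"baz": 1}');

// Compiles fine: `as` only asserts, it never checks anything at runtime
const foo = data as Foo;

// Still no error here; foo.bar is simply undefined, and the real
// failure surfaces later, possibly across an async boundary
const upper = foo.bar?.toUpperCase();
// upper is undefined
```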
You can, in theory use very strong lint rules (disallow `as` operator in favour of Zod, disallow postfix ! operator), but no actual codebase that I've worked on has these checks. Even the ones with the strictest checks enabled have gaps.
Not to mention, there's intentional unsoundness in the type system, so even if you wanted to, you couldn't really create a safe subset of TS.
Then there's the issue of reading the library types of some generic heavy code. When I "go to definition" in my fastify codebase, I see stuff like this
Which expands to this:
(
  this: FastifyInstance<RawServer, RawRequest, RawReply, Logger, TypeProvider>,
  request: FastifyRequest<RouteGeneric, RawServer, RawRequest, SchemaCompiler, TypeProvider, ContextConfig, Logger>,
  reply: FastifyReply<RouteGeneric, RawServer, RawRequest, RawReply, ContextConfig, SchemaCompiler, TypeProvider>
  // This return type used to be a generic type argument. Due to TypeScript's inference of return types, this rendered returns unchecked.
) => ResolveFastifyReplyReturnType<TypeProvider, SchemaCompiler, RouteGeneric>
Other languages somehow don't need types this complicated and they're still safer at runtime :shrug:
That's surprising. I've worked on a few codebases with reasonably mature TS usage and they've all disallowed as/!/any without an explicit comment to disable the rule and explain why the use is required there.
That shifts the complexity of doing that out of the runtime and into the Typescript preprocessor where it's not going to mess with your production instances.
I also don't think it's all that bad; it's a lot of generic types, but it doesn't appear to be doing anything particularly complicated.
I do think they get awful, though. This is something I've been hacking on that I'm probably going to rewrite https://pastebin.com/VszX3MyE It's a wrapper around Electron's IPC and derives a type for the client from the type for the server (has to have the same methods and does some type finagling to strip out the server-specific types). It also dynamically generates a client based on the server prototype. The whole thing rapidly fell into the "neat but too complicated to be practical" hole.
You're NOT assuming correctly. In Kotlin, this would be handled as an extension property on the Request type. You could write it just like normal code instead of extending some global ambient interfaces.
You can CMD+Click on it and read the actual implementation (instead of generated type definitions). The TypeScript ecosystem needs these complicated types because of some design choices (no type-based dispatch). I suggest looking up how other languages solve these problems. You'll find that in TypeScript, you have to reach for complex types far sooner than in other languages.
That’s what I’m specifically talking about. Yours is just dependency injecting a type, which is more avoiding the existing types in the library. That would be the “wrapping” option I was talking about. You don’t need to extend the types if you’re just going to dependency inject them. You could just have an entirely separate object that you pass around at that point.
It’s not some kind of moral sin, but it is a kludge. The type system is now tied to the structure of your code, because scoping is now intrinsic to your types.
It’s not the end of the world, I’ve worked in similar systems, it just tends to have a heavy mental overhead at some point as you now have to keep scoping in mind as part of your types.
Iow, don't go to the definition of a web request handler. Go here: https://fastify.dev/docs/latest/Reference/TypeScript/
Other languages (libraries really) separate these parts, e.g. Spring Boot seems to hide routing away into a method decorator which infers the body type from a target signature and the server is somehow implied(? through a controller?..). Anyway, it's all there, just not in one place. It has nothing to do with Typescript, it's a js library legacy issue.
The "as Foo" construct is for you to tell TS that you know better than it, or that you deliberately want to bypass the type system. You can have a runtime check, but you have to write the code yourself (a type predicate), because TS doesn't write or change any JavaScript*, it just type-checks it.
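Such a hand-written runtime check is a type predicate; a minimal sketch (the `Foo` type is invented for the example):

```typescript
type Foo = { bar: string };

// The `value is Foo` return type tells TS this function narrows the type
function isFoo(value: unknown): value is Foo {
  return typeof value === "object"
    && value !== null
    && typeof (value as { bar?: unknown }).bar === "string";
}

const data: unknown = JSON.parse('{"bar": "hello"}');
if (isFoo(data)) {
  // data is narrowed to Foo inside this branch
  console.log(data.bar.length);
}
```

Libraries like Zod essentially generate these checks for you from a schema, instead of you writing one predicate per type by hand.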
I've certainly worked in new codebases where a relatively simple subset of TS types were used. Even then there were a few places the crazy type system was helpful to have. For example, we enforced that the names of the properties in the global configuration object weren't allowed to have consecutive uppercase letters!
(* with minor exceptions like transpiling for new JS features)
Also, there are many ways to make types opaque (not show their entire verbose structure).
That said, Haxe has externs, enabling you to target JS/PHP and use the rich ecosystems both languages have. The best part of externs is that IF I only use 4 things from a given package, I statically KNOW I only use these 4 things, and can refactor more easily, or even build the thing I need myself.
More news about Haxe can be found here: https://haxe.io/ (the old blog is not updated AFAIK)
ts-morph provides an easy way to use the TypeScript Compiler API to view and edit the AST before compile, once you get your head around the API; it has good examples but isn't thoroughly documented on the web.
https://github.com/dsherret/ts-morph
It also has a companion library[2] for generating TypeBox validators from TypeScript definitions, which I'm currently using in an RPC library I'm working on.
[1]: https://github.com/sinclairzx81/typebox [2]: https://github.com/sinclairzx81/typebox-codegen
After doing a deep-dive comparison, I’m left wondering why to ever choose Zod over TypeBox.
https://github.com/dittofeed/dittofeed/blob/main/packages/is...
Man why doesn’t it have a cli tool?? Wouldn’t it be nice to :'<'>!typeboxify a selection.
I felt like I was being punished for everything. (Maybe some things were project-specific, so I am not saying this as an absolute.) It's slow, the syntax is horrible, errors are obscure super-long lines, it can compile and still explode later (which is what Elixir does too, except Elixir will happily restart and recover)...
Elixir is based on duck typing mostly, but it works very well because you just pattern match your data when you use it. Rust is very strict and can have cryptic errors, but as everything is baked in the language it is way easier to manage.
I am not saying this to be snob about typescript and JS, but I really felt pain when working with that ecosystem, and I wonder if I'm old and stupid or if those tools are really half baked and over complicated.
Zod feels like a crutch for limitations in Javascript and Typescript. But I've found it to be a very useful crutch, and I wouldn't want to write a Typescript API without it.
That's not true. It just crashes the "sub" process, and if the parent process spawns the sub-process again with the same inputs, it's just going to crash again.
Are you aware you can also try/catch your errors in typescript?
The whole point of the library is to validate something at runtime so of course it is going to blow up. There are also API methods that simply return a boolean instead of crashing if it fails validation. You can then use type guarding and narrowing of the type.
The phone systems that Erlang's design emerged from naturally had parallelism and distributed-system properties that it could leverage and build on. Running Erlang on a single-core SBC just to display virtual signage, limiting it to a single thread, not letting it have any redundancy in any way, and then taking the approach of "let it crash" is not going to create a famously robust Erlang setup either; it's just going to create an Erlang-powered signage system that crashes and halts the same as any other runtime would. Erlang/OTP is a physical-systems-building and software-design approach that you can't just put anywhere or bolt onto any arbitrary thing. You're not going to build an OTP-like Single Page Application, because if you reliably crash the browser tab's process every time you start up, it's just going to keep crashing no matter how many times you refresh the page.
It is filling a real gap in the gradual typing of TypeScript, and using validator schema types to power other APIs' generic inference is powerful. I am optimistic about an obvious leader emerging, or at least a better story about swapping between them more easily, but it is a bit annoying when trying to pick one to settle on for work that I have confidence in. That being said, Zod seems like the community favorite at the moment.
Note that TypeScript had competitors, too. It got better. Zod has an early lead and is good enough in a lot of ways, but I'm not sure it will be the one.
Perhaps someday there will be a bun/deno-like platform with TypeScript++ that has validation merged in, but it's probably good that it's not standardized yet.
There are some performance issues, and WebStorm is struggling, which forced me over to VS Code.
But overall pretty happy.
It supports decoding as well as encoding, and fits natively into the rest of the effect ecosystem.
https://effect.website/docs/guides/schema/introduction
It does come with the cost of importing effect, so might not be the best in certain scenarios. However, there are smaller options than zod if that is a concern.
With all the work they're doing on durable workflows, hard to imagine that 2025 is anyone else's year.
https://moltar.github.io/typescript-runtime-type-benchmarks/
This feels about as trivial as an aggregate package of `isOdd` / `isArray` absurd micropackages. Surely I must be missing something because they're incredibly popular?
yeah i could do it manually, but the integration with other tools was the killer.
- Yes, it doesn't tree-shake well because of the chaining API. We accept this tradeoff because there's a lot of value coming from those kilobytes.
- In pure benchmarking numbers, it's nowhere near the fastest validator, but I wonder where you're using Zod and needing millions of operations per second. In my world, the network absolutely diminishes any performance benefit of the fastest validation library.
- It can slow down typescript type checking in extreme cases. This one is aimed at those folks that have a large number of very complex schemas. A good majority will not encounter this problem.
I say all this but on the flip side, I will pick Zod again and again because it has the biggest community behind and is well on its way to having ecosystem around e.g. with react-hook-form, trpc and other framework integrations. For most of my projects the trade-offs above don't materialize. Regarding all the performance and bundle size concerns, I've spoken to Colin, creator of Zod, on a couple of occasions about these and they're all getting addressed in Zod v4 which I'm raring to try out when it's available.
Though I admit another important aspect is community adoption. If your 3rd-party dependency uses zod internally, well now you’re bundling in both, and the added network latency probably negates any performance improvement you were getting in a web app. That’s why I wish libraries would use something more generic that allows you to dependency-inject what you’re already using, like https://github.com/decs/typeschema
Another feature that I use intensively is transforming the response to parse JSON into more complex data types, e.g. dates but also project-specific types. In some situations, I also need to serialize these data types back into JSON. This is where Zod lacks most for my use-case. I cannot easily specify bidirectional transforms, instead I need to define two schemas (one for parsing, one for serializing) and only change the transforms. I have added type assertions that should catch any mistakes in this manual process, but it'd be great if I didn't have to keep these two schemas in sync.
Another comment mentioned @effect/schema [0], which apparently supports these encode/decode relationships. I'm excited to try it out.
[0] https://effect.website/docs/guides/schema/introduction
[1]: https://github.com/sinclairzx81/typebox
It makes tool calling and chaining very robust, despite how finicky LLMs can be.
Basically joi (https://joi.dev/api/?v=17.13.3) but different in some way?
Personally I use zod in my API for body validation; it's super nice to write the schema and then just use `type Body = z.infer<typeof schema>` to get the TS type to use inside the code.
not really "instead", more like "in addition to". Even if your code compiles, if you are receiving data, e.g., via API, then you need to check that it actually conforms to the type/schema you expect. What is run is JS, so it, sadly, won't just crash/error if an object that is supposed to be of `type Cat = {name: string, ownerId: number}` lacks an `ownerId` at runtime.
Have you used Pydantic in Python? It's like that, but feels worse, IMO lol. I say this because Pydantic fits into writing Python code much more naturally than writing Zod stuff fits into writing TypeScript, IMO.
The main distinction is that Zod allows you to extract a TypeScript type from your schema. This means you get both compile-time and run-time type checking.
Typescript is build-time validation, but in the end TS is "just JS"
Zod provides runtime validation (and also works well with TS)
Why do you need to do `type User = z.infer<typeof User>;` instead of declaring a class with typed fields and, idk, deriving a parser for it somehow? (like you would with Serde in Rust for example). I don't understand why Zod creates something which looks like an orthogonal type hierarchy.
For context: I come from the backend land, I enjoy strong, static typing, but I have very little experience with JS/TS and structural typing
Serde in Rust does this with the Rust macro system, but TypeScript doesn't have a macro system. That's why people have to go the other way, the programmer defines the parser, then TypeScript can infer the type from the parser.
I have seen a library that invented their own macro system (a script that you configure to run before build, and it writes code into your node_modules directory), though I can't recall the name.
The schema object is useful for runtime validation, e.g. `User.parse()`. This is handy when validating payloads that come over the wire that might be untrusted. The output of `parse()` is an object of type `User`.
you can kind of think of it like unmarshaling JSON into a struct in Go :)
That’s mostly why I’m curious about the lack of mention :)
I wasn’t aware however of JSON Type Definitions, which hadn’t been invented last time I released software with Ajv, but it does appear to be able to reflect those as well as validate from them, so thank you for showing me that.
Feels like it’s worth that trade off to have a consistent experience consuming other APIs as well, but I could be wrong; I think so far I’ve only used it when I need to consume APIs rather than produce them.
Just to make it explicitly clear, I only received one email - reading my earlier comment back, it made it seem like there maybe was more. It could have definitely been worse!
Thinking maybe Zod + Zod open API.
Really looking to replicate a spec-first workflow similar to protobuf services.
https://typespec.io/ also looks promising but seems early.
It also supports TypeSpec, via their TypeSpec-to-OpenAPI 3 transpilation tooling.
If someone has tried both, can anyone share how it compares with tRPC[0]?
[0] https://trpc.io/
That being said, I am actually slowly migrating off ts-rest and adopting @effect/schema and @effect/platform/HttpApi; I foresee this being the direction the typescript ecosystem heads in over the next few years. However, the APIs are not stable yet and there's a bit of a learning curve, so I would not adopt it lightly.
Java - Jackson
Rust - serde
Python - marshmallow
etc
(Make sense, Go has arguably the largest stdlib of any language.)
In Java:

    Object something = new HashMap<String, String>();
    // This line throws a ClassCastException at runtime, because something is not a String
    String badCast = (String) something;
This has the advantage of throwing an exception in the correct place, instead of somewhere down the line.
Well that's just blatantly not true. Which languages are you thinking of? I'm sure I'm misunderstanding what you said.
I can't think of a single server side language that doesn't have to parse external untyped objects. That's where these serialization libraries come into play.
For example, in Kotlin, you declare a data class and mark it as @Serializable and it generates `toJSON/fromJSON` for you. IMO it's a much better experience than Zod.
That's what I said too.
> For example, in Kotlin, you declare a data class and mark it as @Serializable and it generates `toJSON/fromJSON` for you. IMO it's a much better experience than Zod.
If the JSON object matches the data class exactly, the Zod parser and the Kotlin Serialization parser and Jackson and all those other JSON parsers are similar in complexity.
However, where Zod shines is if the JSON object doesn't match your domain class exactly, e.g. you want to parse a JSON number into a Date, or you want to parse a string field into some custom domain object. In those cases, in zod this is a one-liner with `.transform(...)`. Other libraries will require all kinds of weird workarounds to support this.
The other thing Zod does really well is composition, i.e. making new schemas out of existing schemas. Something like this is difficult to express in most languages' parser frameworks:
Why not make it so TypeScript has an option to compile types into runtime validators (Zod etc.) behind a flag?
I seriously don't understand why I spend time writing this, since the fanbois who've never tried anything nice enough not to need this think it's the best thing. You all loved left-pad and this is no different.
The main goal of WebAssembly is to facilitate high-performance applications on web pages, but it is also designed to be usable in non-web environments.[7] It is an open standard[8][9] intended to support any language on any operating system,[10] and in practice many of the most popular languages already have at least some level of support.