> Few of us want to be making manual HTTP calls out to APIs anymore. These days a great SDK, not a great API, is a hallmark, and maybe even a necessity, of a world class development experience.
I'm of the opposite mind. A simple API that I can just call with Fetch is far better than having to learn most SDKs. Stripe's Node.js SDK is fairly clean/without headaches, but I look at stuff like the AWS JS SDK and want to gouge my eyes out.
At least with an HTTP API, there's some modicum of standardization. SDKs are and will always be the wild west. The one exception to this that I see frequently is authentication. Some are simple Authorization headers w/ a Bearer <Token>; others (like PayPal, the last time I implemented it) require an undocumented base64 encoding of the token.
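To make that concrete, the two styles usually look something like this with plain fetch (the endpoints and env var names below are made up, and the PayPal detail is from memory, so treat this as a sketch rather than a reference):

// Style 1: a plain bearer token in the Authorization header.
const res = await fetch("https://api.example.com/v1/charges", {
  headers: { Authorization: `Bearer ${process.env.API_KEY}` },
});

// Style 2: OAuth-style client credentials, where the id and secret have to be
// base64-encoded into a Basic header before you can even fetch a token.
const basic = Buffer.from(
  `${process.env.CLIENT_ID}:${process.env.CLIENT_SECRET}`
).toString("base64");
const tokenRes = await fetch("https://api.example.com/oauth2/token", {
  method: "POST",
  headers: {
    Authorization: `Basic ${basic}`,
    "Content-Type": "application/x-www-form-urlencoded",
  },
  body: "grant_type=client_credentials",
});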
HatchedLake721 21 days ago [-]
100%. Great API is a must have, great SDK is nice to have.
In my case an SDK is an unnecessary abstraction.
I wrote 50+ SaaS app integrations from scratch and the last thing I want to do is bring over 50 dependencies rather than make simple HTTP requests.
rattray 21 days ago [-]
I worked on the API and SDKs at Stripe alongside Brandur, and honestly I was a little surprised to see him say that too.
I also think the underlying API has to be good for the overall DX to be good, even if these days most devs want static types and thus SDKs for any nontrivial API.
(Thanks for the kind words on the Stripe Node.js SDK btw - I do hope the world can standardize more around good SDK design patterns, just as with good REST design patterns!)
hattmall 21 days ago [-]
Seriously, SDKs are unnecessary bloat. Just give basic curl examples and let people abstract them for whatever language and implementation they want.
halJordan 21 days ago [-]
What this leads to is developers turning to some random unofficial GitHub code to wrap all your PCI-regulated transactions.
colinclerk 21 days ago [-]
I wrote the tweet that Brandur quoted...
I think the bigger shifts are that developers use Checkout more, and complete more admin tasks in Stripe's Dashboard. By providing end-user UX (for both customers and backoffice), Stripe has reduced the API surface area that most developers need to consume. These changes end up having no impact on them.
hresvelgr 21 days ago [-]
I would go even further and say SDKs shouldn't exist and API developers should ship OpenAPI 3 specs so SDKs can be generated by users. OpenAPI tooling is pretty good at the moment.
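As a sketch of what that workflow can look like (the spec path, endpoint, and auth are placeholders, and I'm assuming the paths type shape that openapi-typescript generates): run something like npx openapi-typescript ./vendor-openapi.yaml -o ./vendor-api.d.ts, and the spec's own types flow straight into plain fetch calls:

import type { paths } from "./vendor-api";

// Pull one endpoint's response type straight out of the generated spec types.
type Customer =
  paths["/v1/customers/{id}"]["get"]["responses"]["200"]["content"]["application/json"];

async function getCustomer(id: string): Promise<Customer> {
  const res = await fetch(`https://api.vendor.example/v1/customers/${id}`, {
    headers: { Authorization: `Bearer ${process.env.API_KEY}` },
  });
  if (!res.ok) throw new Error(`GET /v1/customers/${id} failed: ${res.status}`);
  return (await res.json()) as Customer;
}

No SDK dependency to track, and the client is exactly as idiomatic as you want it to be.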
simplesagar 21 days ago [-]
Companies hosting their API specs publicly is a great move, but I think SDKs that aren't officially maintained and endorsed by the company always suffer from a lack of upkeep, which eventually makes them more of a headache to use than just hitting the API directly.
MarcelOlsz 21 days ago [-]
Almost every time I use an SDK I get burned. It happened with Firebase, GitHub, and a few others. Same with ORMs.
noitpmeder 21 days ago [-]
In what regard? The SDKs change out from underneath you?
csomar 21 days ago [-]
SDKs are just bad because they are unnecessary. 90% of the time they'll be doing something that you should be doing; and because of that they'll be doing a bad job. I don't want an SDK that starts processes, wants to read my env variables or do some other sh*t like that. The best SDK is a REST/GraphQL endpoint. After all I am just querying the database of the service. I'd like to do it on my terms.
MarcelOlsz 21 days ago [-]
It's either there isn't full coverage, or there's some convoluted way to migrate, or documentation is missing, or it's a 3rd-party library by some guy living in the mountains of Bhutan who happens to be on vacation at the moment, or any combination of these.
edoceo 21 days ago [-]
Not OP but yep. Forced SDK upgrade, dependency chain and vendor driven churn.
But, e.g., Twilio has been using the same HTTP methods for like 10 years. Many other SDK-based interfaces (not necessarily Stripe) have forced that churn.
noitpmeder 21 days ago [-]
Trying to understand, wouldn't you see the same issues if you depended on an API that had a breaking change?
I guess my view is that shitty interfaces are shitty and people don't think enough about forwards/backwards compatibility, but it's not tied to a pure SDK or API problem
MarcelOlsz 21 days ago [-]
It adds an extra layer of maintenance. Instead of a breaking API change now you might also have an SDK that hasn't updated yet or is incompatible for any number of reasons, and you don't want to be embedded in it before you find out. It's just easier using a REST API.
csomar 21 days ago [-]
Most serious enterprises should have their API versioned.
hakfoo 21 days ago [-]
The drawbacks for SDKs I see:
* Needing to break the SDK's containment. If you want to add some "off the menu" feature, you might be able to extend the SDK itself, but suddenly you're running a fork that can't be maintained and updated with other dependencies.
* Potential for compatibility timebombs. If the SDK uses a deprecated language feature, or ships a hard-coded certificate that expires 2028-01-01, it forces a full-scale upgrade event on developers rather than a fix to a single API call in their own code. In the worst case, this triggers a whole avalanche of dependency updates and breaks unrelated code.
* Design impedance mismatches. Maybe you're writing procedural code and the SDK is object-oriented, or maybe it's just exposing a data structure paradigm that needs a lot of data reformatting and glue to work.
A good SDK is nice to have, and if done right can be a good reference implementation for the API too. But make sure there's a usable public API under there too.
kennu 19 days ago [-]
AWS SDKs do handle some important stuff, though, such as retrying database operations with backoff delays. Without that, your app will fail in unexpected situations, because the cloud service is designed to be stateless and to return specific error codes when the client needs to retry.
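If you do skip the SDK, you end up owning that logic yourself. A minimal sketch of the kind of thing the SDK does for you (the retryable status codes, attempt count, and delays here are illustrative assumptions, not AWS's actual retry policy):

// Retry with exponential backoff and full jitter. Which status codes count as
// retryable and how many attempts to allow are illustrative choices only.
const RETRYABLE = new Set([429, 500, 502, 503, 504]);

async function fetchWithRetry(url: string, init?: RequestInit, maxAttempts = 5): Promise<Response> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url, init);
      if (!RETRYABLE.has(res.status) || attempt === maxAttempts) return res;
    } catch (err) {
      if (attempt === maxAttempts) throw err; // network-level failure on the last try
    }
    const delay = Math.random() * Math.min(100 * 2 ** attempt, 5_000); // capped full jitter
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error("unreachable"); // loop always returns or throws before this
}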
uncomplexity_ 21 days ago [-]
I'm okay with it as long as it comes with conveniences, e.g. better type safety and better error handling.
But yes, at the end of the day there will still be edge cases that should work with plain HTTP REST APIs instead of some abstraction that isn't widely adopted.
swyx 21 days ago [-]
is there somewhere we can cosign this and get the message to @pc? not taking stuff like this seriously is how Stripe starts to die.
(also hi ryan, longtime, hope u are well)
rglover 21 days ago [-]
Hey swyx, hope you're doing well, too (shoot me an email if you see this: ryan.glover@cheatcode.co).
Would love it if someone could get Patrick's attention on this. They need to move back toward the Amber Feng era of API design [1].
[1] https://amberonrails.com/building-stripes-api
Doesn't AWS have good reasons for its signature scheme stuff? XML is probably legacy but...
rglover 20 days ago [-]
My concern is with the actual JS SDK API. They shifted from a dirt-simple, chained methods API to a setup where you have to create class instances for everything (including creating config). To make it possible, they also had to shift to a naming scheme for each class that just makes the code unnecessarily clunky.
E.g., it used to look like this:
// This was callback-based but could have easily been wrapped with promises.
s3.putObject({ option: 'blah' }, () => {});
Now it looks like this:
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
const client = new S3Client({ option: 'blah' });
const command = new PutObjectCommand({ option: 'blah' });
const response = await client.send(command);
---
It's improved now, but for the first 6-12 months after this dropped, the docs were just auto-generated TypeScript docs that were a nightmare to decipher (and a lot of stuff didn't even have examples, so you had to guess).
As for a "why this approach," it looks like they were trying to reduce the footprint of the SDK as a whole by breaking it into individual packages (smart) but instead of just keeping a simple API, they added all of this extra class instantiation and a funky client.send() pattern.
The ideal would be something simple like this:
import s3 from '@aws/s3';
await s3.putObject({ bucket: 'my-bucket', body: '<Some Binary>', key: 'my_file.jpg', });
Or even better:
import s3 from '@aws/s3';
await s3.buckets.post({ name: 'my-bucket', region: 'us-east-2', });
await s3.objects.put({ bucket: 'my-bucket', body: '<Some Binary>', key: 'my_file.jpg', });
Why I prefer the above: it's predictable and obvious. You have a structure of <library>.<resource>.<method> which can cleanly map to HTTP methods/API endpoints. You could even (easily) have aliases for methods like s3.buckets.create() or s3.objects.upload() that just map to the canonical function.
stuartjohnson12 21 days ago [-]
> The new API is trying to move away from a model where subobjects in an API resource are expanded by default, to one where they need to be requested with an include parameter. We had plenty of discussions about this before I left.
This feels like the worst middle ground between REST and GraphQL. All of the data flexibility of GraphQL with the static schemas of REST. Wasn't this kind of thing the whole idea underpinning GraphQL?
Maybe you can get around this with new SDK generators handling type safety for you, but I am definitely not looking forward to trying to understand the incomprehensible 5 layers of nested generics needed to make that work.
I remember looking up to Stripe as pioneers of developer experience. This reads like a PM with their back against the wall with a mandate from above (make requests n% faster) rather than a developer-first design choice made to help people build better systems.
echelon 21 days ago [-]
My team did this at Square too.
When you give everyone a grab bag of everything without asking them what they need, it takes longer to materialize the other entities from other caches and systems, especially in bulk APIs. Most of your callers don't even need or read this data. It's just there, and because you don't know who needs what, you can never remove it or migrate away.
By requiring the caller to tell you what it wants, you gain an upper hand. You can throttle callers that request everything, and it gives you an elegant and graceful way to migrate the systems under the hood without impacting the SLA for all callers. You also learn which callers are using which data and can have independent conversations, migrations, and recommendations for them.
Each sub-entity type being requested probably has a whole host of other systems maintaining that data, and so you're likely dealing with active-active writes across service boundaries, cache replication and invalidation, service calls, and a lot of other complexity that the caller will never see. You don't want the entire universe of this in every request.
It's a nightmare to have everything in the path of every request simply for legacy reasons. If you have to return lots of sub-entities for everyone all the time, you're more likely to have outages when subsystems fail, thundering-herd problems when trying to recover because more systems are involved, and longer engineering timelines due to the added complexity of keeping everything spinning together.
By making the caller tell you what they need, you quantitatively know which systems are the biggest risk and impact for migrations. This moves the world into a more maintainable state with better downstream ownership. Every request semantically matches what the caller actually wants, and it hits the directly responsible teams.
Stripe might also be dealing with a lot of legacy migrations internally, so this might have also been motivated as they move to a better internal state of the world. Each sub-entity type might be getting new ownership.
Grab bag APIs are hell for the teams that maintain them. And though the callers don't know it, they're bad for them too.
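A toy sketch of the handler shape this pushes you toward (names and loaders are entirely hypothetical, not Square's or Stripe's code): the core record stays cheap, and each sub-entity is only materialized when the caller names it in include.

type Order = { id: string; total: number; customer?: unknown; lineItems?: unknown[] };

// Placeholder loaders standing in for calls to downstream systems and caches.
const loadOrderCore = async (id: string): Promise<Order> => ({ id, total: 0 });
const loadCustomer = async (orderId: string) => ({ id: "cus_placeholder" });
const loadLineItems = async (orderId: string) => [];

async function getOrder(orderId: string, include: string[]): Promise<Order> {
  const order = await loadOrderCore(orderId); // cheap and always available
  // Sub-entities are fetched only when explicitly requested, so you know exactly
  // which callers depend on which downstream systems, and a failing subsystem
  // only affects the callers that actually asked for its data.
  if (include.includes("customer")) order.customer = await loadCustomer(orderId);
  if (include.includes("line_items")) order.lineItems = await loadLineItems(orderId);
  return order;
}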
airstrike 21 days ago [-]
Sounds like boring code with lots of plumbing scores yet another point against magically flexible code claiming to handwave away complexity
echelon 21 days ago [-]
> against magically flexible code claiming to handwave away complexity
It might have just been scope creep over time that became a mountain of internal technical debt, data dependencies, and complexity. That's difficult to cleanly migrate away from because you can't risk breaking your callers. That's what it was in our case.
eYrKEC2 21 days ago [-]
I think it's the flexible middle-ground that REST APIs and GraphQL APIs converge on. GraphQL APIs that are completely open are trivially DOS'd with recursive data loops or deeply nested over-fetching requests and hence need to be restricted down to acceptable shapes -- thus converging on essentially the same solution from the opposite direction when constructing a GraphQL API.
tshaddox 21 days ago [-]
Don't most production-ready GraphQL servers have some sort of static query cost estimator that is intended to be hooked up to a rate limiter? At the bare minimum, it should be very easy to set up simple breadth+depth limits per request.
This doesn't seem meaningfully more complex than rate limiting a REST API, especially a REST API with configurable "includes."
dartos 21 days ago [-]
> trivially DOS'd with recursive data loops or deeply nested over-fetching requests
The depth of recursion can be limited in servers like Apollo.
Maybe “trivially easy to misconfigure”
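For reference, that cap is a one-liner to configure in Apollo Server. A minimal sketch using the graphql-depth-limit package (the toy schema and the limit of 5 are arbitrary):

import { ApolloServer } from "@apollo/server";
import depthLimit from "graphql-depth-limit";

const typeDefs = `#graphql
  type Query { hello: String }
`;
const resolvers = { Query: { hello: () => "world" } };

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Reject queries nested more than 5 levels deep before they execute.
  validationRules: [depthLimit(5)],
});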
eYrKEC2 21 days ago [-]
Yeah. First you limit the depth of recursion.
Then you limit which objects can nest which other objects, under which circumstances.
Pretty soon you have a prescribed set of shapes that you allow... and you've converged on the same solution as achieved in the other direction by the REST API requiring explicit data shape inclusion from the caller.
dartos 21 days ago [-]
That’s a slippery slope that I don’t think holds up.
When designing the schema, you keep performance and security in mind.
You need to do the same for REST APIs.
Just because some nodes don’t have edges that connect to some other nodes does not mean you’re back at REST.
The main benefit of GraphQL, not creating super rigid contracts between the frontend and the backend or between services, is maintained.
tshaddox 21 days ago [-]
> and you've converged on the same solution as achieved in the other direction by the REST API requiring explicit data shape inclusion from the caller.
Yes, and with GraphQL you didn't have to invent your own way to represent the syntax and semantics in the query string, and you get to use the GraphQL type system and tooling.
rcaught 21 days ago [-]
> These days a great SDK, not a great API, is a hallmark, and maybe even a necessity, of a world class development experience.
IMO, you can't build a great SDK without a great API. Duct tape only goes so far.
resonious 21 days ago [-]
I agree with you.
I actually don't see any value-add from SDKs that wrap HTTP requests. HTTP is a standard, and my programming environment already provides a way to make requests. In fact it probably provides multiple, and your SDK might use a different one from what I do in the project, resulting in bloat. And for what gain? I still need to look at docs and try my best to do what the docs are telling me to.
Now if it's a statically typed language then I kinda get it. Better IDE/lsp integration and all. But even then, just publish an OpenAPI spec and let me generate my own client that's more idiomatic with my project.
skydhash 21 days ago [-]
This is one of the sentiments that powers the Common Lisp ecosystem. There are already good data structures and functions in the standard (and quasi-standard) library, so why do you need to invent new ones? In other languages (Node.js), you take a library and it brings a whole kitchen with it.
in-pursuit 21 days ago [-]
I agree with the sentiment that great APIs are a prerequisite to great SDKs, but great SDKs are really about saving time. Consider AWS's API, which requires a specific signing mechanism that is annoying to implement manually. In general, the common method of a shared secret passed via bearer token is pretty insecure. I hope to see that change over time, and SDKs can help facilitate that.
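To give a flavor of why it's annoying: SigV4 hashes a canonical form of the request and then derives a signing key through a chain of HMACs. A rough sketch of just the key derivation and final signature (the canonical-request construction, which has its own fiddly normalization rules, is elided, and this should be checked against the AWS docs rather than trusted as-is):

import { createHmac } from "node:crypto";

const hmac = (key: string | Buffer, data: string) =>
  createHmac("sha256", key).update(data, "utf8").digest();

// SigV4 signing key: a chain of HMACs over the date, region, and service.
function signingKey(secretKey: string, date: string, region: string, service: string): Buffer {
  const kDate = hmac("AWS4" + secretKey, date); // date in YYYYMMDD form
  const kRegion = hmac(kDate, region);
  const kService = hmac(kRegion, service);
  return hmac(kService, "aws4_request");
}

// stringToSign is derived from a SHA-256 hash of the canonical request (not shown).
function sign(secretKey: string, stringToSign: string, date: string, region: string, service: string): string {
  return createHmac("sha256", signingKey(secretKey, date, region, service))
    .update(stringToSign, "utf8")
    .digest("hex");
}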
buremba 21 days ago [-]
I like the middle ground where I generate the models from OpenAPI but stick to my preferred HTTP library in the language I'm using.
noitpmeder 21 days ago [-]
But you _can_ put a good SDK in place to abstract away a terrible API.
I've done this at work to ease use for clients -- once they're happy with the SDK interface I can do whatever I want behind the scenes to shore up the API/backend without impacting those same clients and their OK SDK.
Gys 21 days ago [-]
As the saying goes 'if you cannot solve it with duct tape, you did not use enough duct tape' ;)
jsnell 21 days ago [-]
> There was a time not too long ago when Stripe cutting a new API version would’ve been a major event in the tech world, but in three months I didn’t come across a single person who mentioned it.
I don't think it would have been a major event, ever. Few people care about an API version bump in the abstract. It'll mostly be big news if it's bad news somehow (user-visible feature regressions, access restrictions, etc.) or if there are significant new features.
This appears to be neither. It's just inconsequential re-arranging of the living room furniture for better feng shui.
aftbit 21 days ago [-]
>That’s got to be true too. Few of us want to be making manual HTTP calls out to APIs anymore. These days a great SDK, not a great API, is a hallmark, and maybe even a necessity, of a world class development experience.
I may be in that "few of us" set because I really prefer a simple HTTP API that I can implement myself rather than having to take a library dependency on some code with transitive dependencies and unknown future maintainership. A good SDK is nice to have, but integrating with a bad one by accident is horrible.
stnderror 21 days ago [-]
There are a couple more changes that arguably made the API worse:
* Event payloads changed to be “thin events”
* List endpoints became only eventually consistent
They also took away the “expand” parameter, which seems more useful than the new “include” one, which only works on some endpoints.
Generally their API design seems much more confusing and inconsistent nowadays. But to be fair their API is much bigger too. I guess it is hard to keep the same quality when you have so many engineers working on it.
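For anyone who hasn't used it, v1's “expand” looked roughly like this in the Node SDK (from memory, so verify against the current docs); I haven't dug into how much of this the v2 “include” parameter covers:

import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

// Ask for the related customer object inline instead of just a customer id.
const paymentIntent = await stripe.paymentIntents.retrieve("pi_123", {
  expand: ["customer"],
});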
rattray 21 days ago [-]
Eventually consistent list endpoints!? How did they justify/explain that? Do they provide guidance on how to work around it?
HatchedLake721 21 days ago [-]
Strange that https://jsonapi.org/examples/ never caught on with the masses; I suspect it's because it came out around the same time as GraphQL.
For me it’s the perfect mix between doing REST from scratch and GraphQL.
And it comes with the similar include and expand pattern that Stripe v2 introduced.
adsteel_ 21 days ago [-]
My experience with JSON API is that it doesn't come with the out-of-box tooling that GraphQL comes with. GraphQL is declarative enough that just writing the queries, mutations, and types gives you UI documentation and a GraphiQL playground. I still prefer the simplicity of a REST API, but GraphQL is clearly winning on the tooling front.
yasserf 21 days ago [-]
I feel that with TypeScript you can sort of build an SDK by creating a thin wrapper around fetch and having it consume the API types as a generic.
This way you still have the benefits of type safety without the bloat of creating an actual SDK.
SDKs tend to be useful when they hold state / don't just proxy to a REST call, but I don't see why we need wrapper libraries. They also hide away the complexity of knowing how the REST API is meant to consume data (query / params / body / encoding). So the assumption is that the REST API itself makes sense.
I took this approach with a backend TypeScript framework called vramework.dev, which can generate OpenAPI specs as well as the thin fetch wrapper, and it satisfies my needs for my projects.
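For what it's worth, the core of that thin wrapper can be a dozen lines. A minimal sketch with a hand-written response type passed in as the generic (the base URL, auth scheme, and Invoice shape are placeholders):

type Invoice = { id: string; amountDue: number; status: "draft" | "open" | "paid" };

async function apiFetch<T>(
  path: string,
  init: RequestInit & { headers?: Record<string, string> } = {},
): Promise<T> {
  const res = await fetch(`https://api.vendor.example${path}`, {
    ...init,
    headers: { Authorization: `Bearer ${process.env.API_KEY}`, ...init.headers },
  });
  if (!res.ok) throw new Error(`${init.method ?? "GET"} ${path} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// Type-safe at the call site, with no SDK dependency to keep up to date.
const invoice = await apiFetch<Invoice>("/v1/invoices/in_123");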