You can see this in venerable software which has lived through the times of "designing for the user" and is still being developed in the times of "designing for the business".
Take Photoshop, for example, first created in 1987, last updated yesterday.
Use it and you can see the two ages like rings in a tree. At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions, it feels like the computer is on your side, you're free, you're force-multiplied, your thoughts are manifest. It's really special and you can see there's a good reason this program achieved total dominance in its field.
And you can also see, right beside and on top of and surrounding that, a more recent accretion disc of features with a more modern sensibility. Dialogs that render in web-views and take seconds to open. "Sign in". Literal advertisements in the UI, styled to look like tooltips. You know the thing that pops up to tell you about the pen tool? There's an identically-styled one that pops up to tell you about Adobe Whatever, only $19.99/mo. And then of course there's Creative Cloud itself.
This is evident in Mac OS X, too, another piece of software that spans both eras. You've still got a lot of the stuff from the 2000s, with 2000s goals like being consistent and fast and nice to use. A lot of that is still there, perhaps because Apple's current crop of engineers can't really touch it without breaking it (not that it always stops them, but some of them know their limits). And right next to and amongst that, you've got ads in System Settings, you've got Apple News, you've got Apple Books that breaks every UI convention it can find.
There are many such cases. Windows, too. And MS Word.
One day, all these products will be gone, and people will only know MBA-ware. They won't know it can be any other way.
rcarmo 35 days ago [-]
Personally, I think the web is 200% to blame. It has wiped out generations of careful UX design (remember Tog on Interface?) and very, very responsive and usable native (and mostly standard-looking) UI toolkits in favor of bland, whitespace-laden scrolling nightmares that are instantly delivered to millions of people through a browser and thus create a low-skills, high-maintenance accretion disc of stuff that isn't really focused on the needs of users--at least not "power" ones.
I know that front-end devs won't like this, but modern web development is the epitome of quantity (both in terms of reach and of the insane amount of approaches used to compensate for the browser's constraints) over quality, and I suppose that will stay the same forever now that any modern machine can run the complexity equivalent of multiple operating systems while showing cat pictures.
Earw0rm 35 days ago [-]
Front-end dev is optimising for a different set of constraints than HIG-era UIs.
Primarily that constraint is "looks good in a presentation for an MBA with a 30-second attention span". Secondarily "new and hook-y enough to pull people in".
That said...
HIG UIs are good at what they're good at. But there is an element of a similar phenomenon to how walled-gardens (Facebook most of all, but also Google, Slack, Discord..) took over from open, standards-based protocols and clients. Their speed of integration and iteration gave them an evolutionary edge that the open-client world couldn't keep up with.
Similarly if you look at e.g. navigation apps or recipe apps. Can an HIG UI do a fairly good job? Sure, and in ways that are predictable, accessible, intuitive and maybe even scriptable. But a scrolly, amorphous web-style UI will be able to do the job quicker and with more distinctive branding/style and less visual clutter.
Basically I don't think a standardised child-of-HIG formalised UI/UX grammar could keep up with the pace of change the last 10-15 years. Probably the nearest we have is Material Design?
mckn1ght 35 days ago [-]
> walled-gardens (Facebook most of all, but also Google, Slack, Discord..) took over from open, standards-based protocols and clients. Their speed of integration and iteration gave them an evolutionary edge that the open-client world couldn't keep up with
Seems to me to be a combination of things, none of which indicate that the new products are implicitly better than the old. The old products could’ve incorporated the best elements of the new. But there are a few problems with that:
- legacy codebases are harder to change; it's easier to just replace them, at least until the new system becomes legacy. Slack and Discord are now at the "helpful onboarding tooltip" stage
- the tooling evolved: languages, debuggers, IDEs, design tools, collaboration tools and computers themselves all evolved in the time since those HIG UIs were originally released. That partially explains how rapidly the replacements could be built. And, true, there was time for the UX to sink in and for people to think about what would be nice to add, like reaction emojis in chat
- incentive structures: VCs throw tons of money at a competing idea in hopes that it pays off big by becoming the new standard. They can’t do that with either open source or an existing enterprise company
Earw0rm 34 days ago [-]
I'm not arguing that the new products are better, just that they were evolutionarily successful.
I think the issue was less one of legacy codebases, and more that getting consensus on protocols and so on is _always_ slow and difficult, and that expands exponentially with complexity. And as the user-base expands, the median user's patience for that stuff drops. "What the hell is an SMTP Server and why should I care what my setting is" kind of stuff.
Meanwhile the walled gardens can deliver a plug-and-play experience across authentication, identity, messaging, content, you name it.
And all this against a background of OS platforms (the original owners of HIGs) becoming less relevant vs Web2 property owners, and of strict content/presentation separation on the Web never really catching on (or rather, of JS single-pagers which violate those rules being cheaper and sexier). Plus a shift to mobile, which has, despite Apple's efforts, never strictly enforced standards - to the extent that there's no real demand from users that apps should adhere to a particular set of rules.
Analemma_ 35 days ago [-]
I also think the web is 200% to blame, but for a different reason: ad-tech in general and Google+Apple in particular taught users that software should cost $0. Once that happened they didn't go back, and it torpedoed the ISV market for paid programs. You used to go to CompUSA and buy software on a CD for $300; that can't happen now. Which would be fine, except adware filled the revenue gap, which by necessity brought a new set of design considerations. Free-as-in-beer software fucked us over.
leidenfrost 35 days ago [-]
I was about to say the same thing.
It even happens in the FOSS world. Open Source theorists tell us all the time that "free" only means "free-as-in-freedom". That we can share the code and sell the builds.
But whenever someone actually wants to charge users money for their own FOSS apps, even if it's only a few bucks to pay for hosting and _some_ of the work, outraged users quickly fork the project to offer free builds. And those forks never, ever contribute back to the project. All they do is `git pull && git merge && git push`.
Maybe the Google+Apple move was a strategy against piracy. Or maybe it was a move against the FOSS movement. And maybe the obsession with zero-dollars software was a mistake. Piracy advocates thought they were being revolutionaries, and in the end we ended up with an even worse world.
sirjaz 35 days ago [-]
We need to get back to native software, aka "apps". As I have said before, we have these powerful machines with cheap storage. Why do I need to connect to a remote host through a bloated web interface just to read a document? It would be better, faster, and smaller locally.
mindslight 35 days ago [-]
When I was learning embedded design, the general rule of thumb was to aim for 10ms polling for user input: human tolerance for delay is around 100ms, so 10ms appears instantaneous. Then I see products like Nest come out, with big endcap displays at home improvement stores, and I'm like: how do people not just immediately write this off as janky trash?
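A minimal sketch of that fixed-rate polling idea, in Python for illustration (real firmware would differ; `read_input` and `handle_event` are hypothetical stand-ins for whatever the device actually does):

```python
import time

POLL_PERIOD = 0.010  # 10 ms: a change is handled well inside the ~100 ms
                     # window people perceive as "instantaneous"

def poll_loop(read_input, handle_event):
    next_tick = time.monotonic()
    last = read_input()
    while True:
        current = read_input()
        if current != last:        # input changed since the previous poll
            handle_event(current)
            last = current
        # Schedule against absolute time so timing error doesn't accumulate.
        next_tick += POLL_PERIOD
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Scheduling against `next_tick` rather than sleeping a flat 10 ms is what keeps the jitter down.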
Then again, maybe the extra lag (and jitter!) gets a pass because it's part of these products positioning themselves in the niche of "ask the controlling overlord if you may do something" rather than "dependable tool that is an extension of your own will".
mistrial9 35 days ago [-]
I support your point but Tog was a blowhard
rcarmo 35 days ago [-]
He wrote excellent, non-blowhardy books. I never met the gentleman.
mistrial9 35 days ago [-]
"liquid courage"
AshamedCaptain 35 days ago [-]
One example in MS Word is the ribbon. It is a relatively recent invention, and when it was introduced, _at least_ they went to the effort of using telemetry to see which features were actually used often versus which ones were not, and designed the ribbons accordingly.
Nowadays every new "feature" introduced in MS Word is just randomly appended to the right end of the main ribbon. As it is now, you open a default install of MS Word and at least 1/3 of the ribbon is stuff that is being pushed at users, not necessarily stuff that users want to use.
At least I can keep customizing it to remove the new crap they add, but how long until this customization ability is removed for "UI consistency"?
scotty79 35 days ago [-]
> At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions,[...]
Photoshop has a terrible set of conventions. I'd take Macromedia Fireworks any day of the week instead. But Adobe bought Macromedia and gradually killed Fireworks over 8 years, citing "overlap in functionality" between it and 3 other Adobe products.
That move pretty much enabled the enshittification of Photoshop, which wrapped its terrible core in equally terrible layers of web views and ads.
rcarmo 35 days ago [-]
I still run Fireworks under WINE in Fedora. The only real pain point is that the fonts are stupefyingly small in comparison to other apps due to the way modern GUIs and resolutions have grown (this is fixable, but not everywhere).
mistrial9 35 days ago [-]
Fireworks is great, still used today
DanielHB 35 days ago [-]
I think you're looking through a bit of rose-tinted glasses. I remember back in the day how much people complained about every feature being shoved into the menus of Word and Photoshop, eventually growing the menus too long with a bunch of features no one cared about and obscuring the actually useful ones.
rcarmo 35 days ago [-]
Yeah. This (and Fitts's Law) is actually why the ribbon UI came about. 80% of the most common stuff is right there, and you can customize or search for the rest.
ghaff 35 days ago [-]
Yeah, I actually saw a presentation at Microsoft's Mix (the web-oriented conference they ran for a while) about the ribbon. I never loved the ribbon but it was really an attempt to deal with all the features that maybe 1% (at most) of users ever utilized but that those (many different) 1%s REALLY cared about.
One of the reasons I like Google Workspace. I'm mostly not a power user these days, so the simpler option set really works for me, even if very occasionally I run into something I can't quite do the way I'd prefer.
keyringlight 35 days ago [-]
Even before the ribbon they had identified the problem and were trying to deal with it. I think either Office 2000 or XP had the 'traditional' drop-down menus but would hide less-used items until you clicked a little ≫ expander at the bottom. The downside was that the menu's layout would shift a bit as items moved between less- and most-used, which works against learning where items are located.
harrall 35 days ago [-]
Even if you were the 99% user, the ribbon solved the problem with toolbars.
Word had like 20 different toolbars. If you wanted to work with tables, you had to open the Table toolbar. If you wanted to work with WordArt, you opened the WordArt toolbar. You could be the 99% user and still end up with 10 toolbars open even if you only needed one button on a certain toolbar. On a small monitor, half your screen consisted of your document and the other half toolbars.
I can’t remember if you could customize toolbars but not a lot of people would spend the time to do that. I sure never did.
aleph_minus_one 34 days ago [-]
> I can’t remember if you could customize toolbars but not a lot of people would spend the time to do that. I sure never did.
You could, but the typical user made few customizations (if any).
DanielHB 35 days ago [-]
Not only that, but in Photoshop and Word, dialogs for features released in 1995 had different design conventions from dialogs for features released in 1999. Not the UI controls and such (everything was still using Win32), but the design language itself: how to display things to the user, how to align buttons, and so on.
If anything, these kinds of layout patterns are better today than they were back then. What the OP is complaining about was ALWAYS a problem for any long-lived software. But given how much _older_ some software is today, it is no wonder it is way more noticeable.
Win95 had a bunch of Win3.1 settings dialogs, just like Win11 still has a bunch of Windows 98 settings dialogs (the network adapter TCP/IP configuration one comes to mind).
In Win11, the old-style settings dialogs are the good ones, the ones that allow you to change the settings.
inurqubits 35 days ago [-]
Photoshop used a custom UI toolkit which automatically generated dialog layouts based on a description of their contents (string field, boolean, color picker, ...).
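A toy illustration of that pattern (nothing to do with Adobe's actual toolkit), assuming tkinter: the dialog is generated from a declarative list of (label, kind) pairs instead of being laid out by hand.

```python
import tkinter as tk

# Map field kinds from the description to widget constructors.
FIELD_WIDGETS = {
    "string": lambda parent: tk.Entry(parent),
    "boolean": lambda parent: tk.Checkbutton(parent),
    "color": lambda parent: tk.Button(parent, text="Pick color..."),
}

def build_dialog(title, fields):
    """fields: list of (label, kind) pairs, e.g. [("Radius", "string"), ("Preview", "boolean")]."""
    root = tk.Tk()
    root.title(title)
    for row, (label, kind) in enumerate(fields):
        tk.Label(root, text=label).grid(row=row, column=0, sticky="w", padx=6, pady=3)
        FIELD_WIDGETS[kind](root).grid(row=row, column=1, padx=6, pady=3)
    return root

# build_dialog("Gaussian Blur", [("Radius", "string"), ("Preview", "boolean")]).mainloop()
```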
the_third_wave 35 days ago [-]
> One day, all these products will be gone, and people will only know MBA-ware. They won't know it can be any other way.
Just like vinyl made a comeback, 'real' software will come back as well, maybe running in an emulator in a browser. Yes, there will be copyright problems; yes, there will be hurdles; but if and when MBA-ware becomes the norm, real software will persevere. Free software for sure, commercial software most likely, even if legally on shaky foundations. The tighter they grip, the more users will slip from their clutches.
patates 35 days ago [-]
> vinyl made a comeback
Well, at least one of us must be living in a bubble.
batch12 35 days ago [-]
I don't know about a full comeback, but it is really the only physical format I see for sale in major retail stores in the US (like Walmart, etc.)
The format recently outpaced CDs [0], but I think that is partly due to a decline in CD sales too.
According to a 2023 study, half of vinyl buyers in the US don't have a record player. It seems like for a lot of people it's less a music medium and more a combination of supporting their favorite artists, home decor, and collecting (like trading cards). My guess is that a majority of all vinyl buyers listen to a lot of digital music as well.
It went from 'nearly dead, scrap the factories' to 'we need more manufacturing capacity to supply the increasing demand'. Not a bubble; I don't do vinyl, but I know several people who do. The same will happen to 'real software' once the current iteration of productivity tools can no longer be distinguished from advertisements for such, once you need to really dig down to do simple things but have "AI-enhanced autoclippy" under every "button". It is just the way things go no matter the field: 'craft' beer, vinyl, sourdough, vintage whatever. In some cases it is actually rational, in others (vinyl) it is mostly driven by emotional factors. The 'craft software' revival would be an example of a rational move.
AndrewKemendo 35 days ago [-]
What a great description
We live in the timeline where the Ferengi were the ones who bootstrapped the Borg with their hyper-consumption trade culture
All will be consumed into the financial market (Borg cube) eventually
dist-epoch 35 days ago [-]
> At the core of Photoshop is a consistent, powerful, tightly-coded, thoughtfully-designed set of tools for creating and manipulating images. Once you learn the conventions, it feels like the computer is on your side, you're free, you're force-multiplied, your thoughts are manifest.
It's funny that today there still isn't free image-editing software comparable to the Photoshop of 2000. Krita is close, but still cumbersome to use.
misnome 35 days ago [-]
I’m sure that it is a complete coincidence that CS6, the last version before they moved to a subscription model, is also the last time it was mostly nonsense-free.
marsovo 35 days ago [-]
Indeed. The trouble with subscriptions is that you don't need to make the new version actually good enough to convince people to upgrade, you just need to make it not bad enough for people to abandon the subscription entirely.
I think the same thing happened to Windows and Office.
Having said that, there's probably another elephant in the room: the current generation that grew up with phones and tablets and never really learned to use traditional computers fluently.
noduerme 35 days ago [-]
I would love to jettison Photoshop/Illustrator and just use Affinity. Illustrator has recently gone from taking 30 seconds to over 2 minutes to launch on my M1. It's an atrocity. But Adobe software is so entrenched in printing that, even though print media is only 10% or less of what I do these days, it would just be an endless headache to deal with file conversions to and from other designers, print shops and publishers. And anyway I expect Affinity will go the same way soon, now that they're owned by Canva.
rcarmo 35 days ago [-]
I'm surprised nobody mentioned GIMP in the 2 hours since you wrote this (I still use Fireworks inside WINE).
card_zero 35 days ago [-]
I'm not particularly surprised: who wants to try to hold up GIMP as an exemplar of a good interface? Maybe by arguing "it's not as bad as it was, and now it hardly sucks at all".
I'm an old Photoshop user who has GIMP now. I'd like to do a breakdown of everything that's wrong with its interface behavior, but analysing exactly how it does behave would be a major mission. There's something - several things - wrong with how it selects, moves, deselects, selects layers, and zooms, compared to what I expect for the workflow I try to have. Possibly this is just a matter of needing to learn new conventions, but possibly I have learned the GIMP conventions and they're just clunky.
Interesting, though, since this is organic, grass roots, free software interface crappiness, not the coercive corporate kind.
cladopa 35 days ago [-]
I have used Photoshop and GIMP a lot. I do not see how GIMP's interface is that clunky.
It seems clear to me that most people get used to some interface and anything different is wrong. I saw that a lot with Windows users changing to Mac or Linux interfaces. Anything not 100% identical is "interface crappiness". Apple interfaces have always run circles around Microsoft's, which copied things from Apple without understanding the fundamentals or hiring proper designers.
My problem with GIMP was always that it was not as powerful for the professional, for example in its support for color spaces with lots of bits per channel. That looks solved now, although I have not had time to test it personally.
On the other hand, you could always program GIMP much more easily, and with fewer restrictions, using Script-Fu (a Lisp dialect), as with most open source software.
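Script-Fu itself is Scheme, but the same GIMP procedure database is also reachable from the Python-Fu bindings bundled with GIMP 2.x; a rough batch-processing sketch of the kind of scripting meant here (run from the Python-Fu console or `gimp -i`; treat it as an outline rather than gospel):

```python
from gimpfu import pdb  # ships with GIMP 2.x

def flatten_and_export(src_path, dst_path):
    image = pdb.gimp_file_load(src_path, src_path)            # (filename, raw filename)
    pdb.gimp_image_flatten(image)
    drawable = pdb.gimp_image_get_active_drawable(image)
    pdb.gimp_file_save(image, drawable, dst_path, dst_path)   # format picked from the extension
    pdb.gimp_image_delete(image)

# flatten_and_export("/tmp/in.xcf", "/tmp/out.png")
```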
card_zero 35 days ago [-]
I did change from Mac to Windows shortly after the millennium, and it was fine. Window buttons were in different places, but that was trivial. Windows would minimize properly instead of the "windowshade" thing Mac had at the time (folding up into the titlebar), because Windows had a taskbar to minimize to, and that was better. I disliked the prevalence of installers (generally you could just drag an executable to another Mac and it would work) and I disliked the creepy labyrinth of user-hostile system files and system folders, which has only gotten worse. But that isn't really interface. I thought the start menu was stupid, but I could just ignore it, and I still ignore it today (Explorer all the way).
On the whole I didn't think there was much difference, apart from a general vibe of crassness on Windows, which had no clear cause. But that's switching from Classic Mac OS: you're probably talking about the new OSX BSD linuxy one. Besides, it would have been Win2K I switched to, which was one of the best iterations.
I used Linux for a while too, but that was XFCE, so again kind of samey. Mainly I remember constantly having to go through sudo whenever I wanted it to do anything; that was the distinctive interface difference.
Color spaces, a closed book to me. I can't stand color spaces, they were always some sort of unwieldy mess of interest to other people. People who print things, maybe, I don't know. From my perspective, they caused a lot of pretentious confusion when people were trying to make images on computers for viewing on computers and for some reason didn't do the natural thing and think in three bytes of RGB.
Scripting, I hadn't even thought about, and I'll give you that one. Photoshop had a mechanism for recording actions and playing them back, and it produced a linear list of actions, with no control flow. I definitely wanted a better scripting mechanism.
PaulHoule 35 days ago [-]
A while back I was making red-cyan anaglyphs [1], for which you want a slider that lets you move the left and right channels relative to each other, so you can put objects close to the plane of the screen/paper and minimize all sorts of problems such as vergence-accommodation conflict.
I wrote a little tkinter program to do it because, not least, tkinter is in the Python standard library.
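Not the actual tool, but a minimal sketch of the channel-shift idea behind a red-cyan anaglyph (Pillow and numpy assumed, and the left/right frames assumed to be the same size):

```python
import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, shift_px=0):
    """Red channel from the left view, green/blue from the right view.
    shift_px slides the right view horizontally (the knob the slider controls)
    to move objects toward the plane of the screen or paper."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    right = np.roll(right, shift_px, axis=1)   # horizontal offset between the eyes
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]                 # red from the left eye
    out[..., 1:] = right[..., 1:]              # green and blue from the right eye
    return Image.fromarray(out)

# make_anaglyph("left.png", "right.png", shift_px=-12).save("anaglyph.png")
```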
I found out my stereograms looked OK in tkinter, but when I exported the images to the web there were ghosts. What I found out was that modern GUI frameworks do color management in the sense that they output (r,g,b) triples in the color space of your monitor, which is what you get when you take a screenshot on Windows. So if you have a wide-gamut monitor you can do experiments where you have a (0, 192, 0) color in an image specified in sRGB and then find it was (16, 186, 15) in a screenshot. [3] Drove me crazy until I figured it out.
What was funny was that tkinter was so ancient that it didn't do any color correction, it just blasted out (0, 192, 0) when you asked for (0, 192, 0).
It also turns out to be a problem in printing, where the CMYK printer presents itself as having an RGB color space in which the green is very saturated but never gets very bright, and where sRGB green also has red in it; but if you attach the printer's color profile to an image you can force a saturated green. A lot of mobile devices are going in the Display P3 direction, so you can get better results publishing files in that format.
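A rough illustration of that "attach the profile" step (not my exact workflow; it assumes Pillow's ImageCms module, and the .icc path is a placeholder):

```python
from PIL import Image, ImageCms

def convert_and_tag(src_path, dst_path, dst_profile_path):
    """Convert an sRGB image into a destination profile (e.g. a printer or
    Display P3 profile) and embed that profile in the saved file, so the
    RGB numbers keep their intended meaning downstream."""
    srgb = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB"))
    im = Image.open(src_path).convert("RGB")
    converted = ImageCms.profileToProfile(im, srgb, dst_profile_path, outputMode="RGB")
    with open(dst_profile_path, "rb") as f:
        icc_bytes = f.read()                       # embed the destination profile
    converted.save(dst_path, icc_profile=icc_bytes)

# convert_and_tag("photo.png", "photo_p3.png", "DisplayP3.icc")  # placeholder .icc path
```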
That and a few other print projects (so easy to screw up a $150 fabric printing job) have gotten me to care a lot about color management.
[3] the native green on the monitor is more saturated than the sRGB green, so it adds a little red and blue to make it look the same!
card_zero 35 days ago [-]
Yup. Issues like that have trained me to fear and avoid anything that says "CMYK" or "gamut" or similar, not because they're beyond comprehension, but because for many use cases the whole thing is an unnecessary pantomime. And it's often silently switched on by default, waiting to screw things up for you. Blender does similar things - in order to get what you intended to be pure black produce 0x000000, and what you intended to be pure red produce 0xff0000, you have to dig down somewhere - "post processing," I think, because they suffer from some fantasy of being Pixar - and switch something to "raw", and as I remember that isn't even the whole story and there are some other tweaks to make too, just to make black output black and red output red.
PaulHoule 35 days ago [-]
Last summer I looked a lot at "print on demand" services, and something I found amusing was that almost all of them will only take files in sRGB format, even though they actually support a gamut which covers some colors better than sRGB and other colors worse. My monitor supports Adobe RGB and covers both spaces pretty well.
I think, however, that they have more quality problems if people send files that have different color profiles so they just take sRGB.
I had some print jobs go terribly bad; it turns out yellow flowers are out of gamut for my camera, for sRGB, and for CMYK. If you try to print something that has out-of-gamut colors, the printer will do something to put them into the gamut, and you might not like it. I learned to turn on the gamut warning in Photoshop and bring the colors into the CMYK gamut before I print, even if I am sending in an sRGB file. It didn't bother me so much when I was printing 'cards' with my Epson ET-8550, but once I had orders come back ruined, I figured it all out.
Jach 35 days ago [-]
Indeed, especially because Krita is terrible for a lot of what I think of as "image editing"; I'd much rather use Gimp for work like that. Though I do quite like the recent AI stuff available for Krita at the moment; e.g. there's a plugin that lets you do an "object select" performed by AI, so e.g. you click on a person's shirt and it selects the shirt, or you add other parts of the person (or draw a box) and get the person themselves, separate from the background. Or click on a bird, or a speaker, or whatever. And you can use the ai-diffusion stuff to remove it, easier than the old heal-tool techniques. (The selection is of course not perfect, but it's a great complement to the other selection tools that more or less overlap with Gimp's, though I prefer Gimp's knobs and behaviors after the selection. And I'm sure Photoshop has similar AI stuff by now, but I remember over the years it's seemed like a lot of stuff crops up in open source first; e.g. I think for quite a while "heal" / "smart patch" was just a script-fu plugin.)
I just appreciate that there are many options, and that they can talk to each other pretty well. If one becomes unusable, I have others, and sometimes there are newer ones. I did a stage banner project last December where I had Gimp, Krita, and Inkscape all open at the same time. (With a quick use of an old version of Illustrator in Wine to export into an Illustrator template matching the dimension outlines and particular color space the printing company needed...)
Photoshop tried to be the everything tool, and it probably is and will continue to be the best kitchen sink (and if I knew it better and had a license, it probably could have sufficed by itself for my project), but for any specific thing there's going to be something else that's better for at least that thing (maybe even one of Adobe's other products, like Illustrator). Krita isn't competing with Photoshop so much as with Photoshop's usefulness in drawing and making art, and in that space are also Clip Studio Paint or Procreate on iPads, both quite popular with hobbyist and professional artists. Gimp isn't competing so much on the art creation side (or even making simple animations like Krita lets you do more easily) as it is on the editing and manipulation side. And when editing camera raws, you'd use Lightroom/Darktable/RawTherapee. Inkscape is vector graphics, a whole other use case and competitive landscape.
(Speaking of old/dead software, I remember using Xara Xtreme LX for a while, it was really slick...)
zoomerknowledge 35 days ago [-]
Photopea
ge96 35 days ago [-]
Windows has been annoying me with this notification that keeps popping up from time to time, "Hey, want to use Adobe?", in the bottom right corner.
reginald78 35 days ago [-]
I just disable the notifications entirely. If you don't want me to view it as a garbage dump, stop putting all your garbage there.
hulitu 32 days ago [-]
> There are many such cases. Windows, too. And MS Word.
The current Windows and Word are a shitshow from a UX perspective.
PaulHoule 35 days ago [-]
My understanding is the old Photoshop was pretty bad. I went through a phase of being a student of file formats, and the PSD format is practically a case study in how not to do it. (By my metrics, PDF was excellent for its time; they've been able to cram so much crazy stuff into PDF because the foundation is good.)
When I first used it on a Mac circa '95 or '96, it felt like a legacy product when I was using it for web work, because the color management features intended for print meant it would always screw the colors up when you output for the web [1] unless you disabled color management, whereas the GIMP 'just worked' because it didn't color manage. [2]
To play devil's advocate, the old Photoshop was a huge lump-sum purchase, which meant you'd buy one version and then go six years without updating it. The new one is more accessible to people. Also, I find the A.I. features in Photoshop pretty useful; it is easier than ever to zap things out of images: it took just seconds to disappear a cable in [3], for which the older tools didn't work so well. For [4] it removed a splotch, and later I had it add a row of bricks to the bottom to improve the visual balance of the image.
Note that people sure complained about Office in the '95 era; see [5]
[1] sRGB was new!
[2] Funny I do a lot of work for print now, some of which pushes the boundaries of color management, such as making red-cyan anaglyph stereograms
> User is dead. User remains dead. And we have killed him. How shall we comfort ourselves, the developers, the designers, the growth hackers? What was holiest and the final judge of all that the world has yet owned has bled to death under our A/B-tests and new features. Who will wipe this blood off us? What garbage collector is there for us to clean ourselves? What conference of atonement, what disruptive technology, what sacred meeting shall we have to invent?
Tasteless, but I felt, and still feel, like the notion of a user is truly lost. Somehow, the only technology which technically allows direct 1-many relationships between a small group of builders and a vast number of users has managed to create an industry which actively prevents and disincentivizes such relationships.
crabbone 35 days ago [-]
> Somehow?
Oh no! Working with end-users is madness. It's tiring, exhausting, counterproductive. Users don't know what they want, and will demand the worst possible solution for them. They will resist and circumvent security measures in the program. They will make sure to use the program in an unintended way and then endlessly complain about it not working in the way it was never meant to work.
The day I transitioned from B2C to B2B my emotional well-being improved tenfold.
Now, on a more serious note: making an individual user-facing product is more expensive. It's easier to pitch and sell the product to an organization managing multiple users, because the organization will agree and compromise on many aspects of the product, and then will create internal organizational policy for its users to use the product only in permitted ways. It will make feedback and improvement requests expensive for the users, and it will serve both as the customer surveyor and as the first tier of customer support for the software shop. It's a match made in heaven (and that's how Microsoft and the likes built their empire). There's nothing surprising about that.
card_zero 35 days ago [-]
See also: customers, and clients. Getting in the way, wanting things, causing chaos. Every business runs much more smoothly without them.
rjbwork 35 days ago [-]
>Somehow, the only technology which technically allows direct 1-many relationships between a small group of builders and a vast number of users has managed to create an industry which actively prevents and disincentivizes such relationships.
Bean counters and social-status game players came in, spewed money everywhere, and said to the engineers: "do your engineer thing, we'll handle the money and the people".
kevingadd 35 days ago [-]
I think this string of questions from the middle of the post really gets to the heart of it:
⁃ Does the “user” feel respected by the software?
⁃ How does this software affect the mental health of the “user”?
⁃ How does the software fit into the rest of the “user's” lifestyle?
⁃ Does this software help the “user” perform a task/entertain them without coercion?
A lot of modern software looks really bad if evaluated through the lens of these four questions.
dist-epoch 35 days ago [-]
> How does this software affect the mental health of the “user”?
We need to talk about Jira...
dartos 35 days ago [-]
Yeah… it’s awful.
The user is often not the end customer of any software, so it’s not optimized for their benefit.
amelius 35 days ago [-]
So, kill the advertising monetization scheme.
andrepd 35 days ago [-]
The more I think about it, the more I conclude that advertising truly is the root of so much that is rotten in the modern world.
Dansvidania 35 days ago [-]
Advertising is a necessity, IMO. The problem is the adversarial way it is done.
How would you exercise your free choice of products, if you don't know about them?
Selling and advertising could be, and has been, done in a "positive-sum" way; it's just not as effective.
I would point to the _under-regulated_ stock market as the problem, causing short-term profit to be highly incentivised... but then again that's a slippery slope too, and I don't have the macroeconomics chops to propose where exactly the problem is and how to fix it :/
noirscape 35 days ago [-]
I think advertising is a necessity, but like many things, there's an expected boundary in place for where, how and how often you expect ads.
Take TV, for example; advertising is expected during commercial breaks, which are clearly marked as such, are (around here) regulated for the public TV channels (so they can't interrupt regular programming), and have an explicit intro/outro segment to make it clear when regular programming resumes. Radio follows similar rules here.
Same with things like magazine ads; they're marked as separate from the page, usually explicitly don't follow the magazines style and are usually only a small part of the magazine.
By contrast, digital advertising's most insidious trick is that no such boundaries exist. Ads must be shoved everywhere in as many places as possible. You got some whitespace? Fill it with ads, ads and even more ads. People use adblockers? Directly insert your ad into the content and call it a sponsorship. People are looking at ads? We gotta track all of it so we can allegedly improve the clickthrough ratio and profile them, privacy be damned.
It's an industry entirely based on taking a hand when offered a finger to make a line go up. Ads aren't inherently the problem, the entire problem is the gluttony, greed and avarice that's embedded in the digital ads industry. (Which is funny given how much digital ad payout rates are a seeming race to the bottom, to the point where I wouldn't be surprised if most sites that run ads still aren't even sustained by them anymore.)
amelius 35 days ago [-]
> How would you exercise your free choice of products, if you don't know about them?
The problem is that advertising does two things:
1. inform people of the availability of the product/service
2. persuade people to buy said product
Point 2 is the evil part. Note however that we can easily remove 2 but still have 1. For example, show the information when the user wants to see it, as opposed to shoving it in their face (distraction) when they were doing something else. We could bring back yellow pages. Or have dedicated websites where users can search for products and services. It is not rocket science.
Dansvidania 35 days ago [-]
I don't disagree, but I would still argue that it would not work as well. The only way I'd see this working would be with heavy government intervention, and I think the British figure of speech for that is "fat chance".
amelius 35 days ago [-]
Yes, government should intervene here. But there is another figure of speech that goes like "don't let perfect be the enemy of good" ...
Dansvidania 35 days ago [-]
what would be the "good-not-perfect" solution?
amelius 35 days ago [-]
It probably needs more thought, but: fines for anyone who does not comply. You will still get ads that slip through, but it can be a good start.
I'd also like to point to tobacco commercials, which are banned in many countries now. This ban works (it's probably not perfect, but I cannot remember seeing such an ad recently, even online).
card_zero 35 days ago [-]
Keeping it in a designated place, where you have to go to look for it deliberately? Like yellow pages, as suggested? Sounds good to me.
When advertising was new, people really liked it. It sold newspapers, it was the thing people wanted before news.
atoav 35 days ago [-]
This is why I predominantly use CLI tools where it makes sense. Two CLI tools are more alike than two AI tools; they tend to respect the user and their intelligence more, there is no sign-in, they work together with other software, and they will work for decades.
I'd love it if GUI applications were similarly stringent, or even had the goal of creating an ecosystem, but they don't; they are competing against each other, trying to grab user attention, bending users to their wills, locking them in. Not all of them, of course, but the mental overhead with CLI is much smaller.
hello_computer 35 days ago [-]
CLI presently selects for users with reading comprehension, who are a hard sell for trojan horses. If the great unwashed ever took a shine to CLIs, CLIs would become just as bad. You can already see a bit of this in PowerShell (i.e. marketing & telemetry).
teucris 34 days ago [-]
I’ve read a lot of articles like this over the past decade, and books, and papers. I agree with most of it: we need to bring technology back “to the people”. But everything I’ve read, including this, has two problems:
1. Consistently, people reminisce about older tech that they loved, and wishing they could have stuff like that again. But the reality is that people like us, on HN, are not the average person. The tools we adored were great to us because we knew how to wield their power and felt empowered to do so. What parts of those applications could the broader population use easily?
2. What should we be using to create personal/folk/situated software? How do we even accomplish this goal? Again, we (HN readers) know the tools and feel empowered. But for the ideas in this article to come to light, many more people need to be empowered to solve their own problems and tailor their tools to their needs. What technologies should they use? I never get a good answer to that one.
ickelbawd 34 days ago [-]
I’ve been coming around to a similar point of view that modern software technology removes human agency. Everything is being automated—we thought it would free up our time for other things, but to my eyes we’ve become less free. AI is only going to accelerate this phenomenon—robbing us of even higher levels of agency as well as our ability to think independently and deeply. All in the name of efficiency and engagement. I struggle with this daily since I work in and have been steeped in tech for decades. I used to love it. Part of me still sees the good that modern technology has enabled too. I’m not sure what the solution is here besides logging off the internet and returning to live in the real world with real human interaction and slower, more meaningful connections.
PaulHoule 35 days ago [-]
I like the sentiment, but the article itself ought to be edited to be 1/3 the length, and certain themes should be broken out. The story of 'cyborg software' should be told in depth; for those of us who live that dream it doesn't need a lot of explanation, but for other people it needs to be spelled out.
There are some interesting business patterns, such as 'investment then disinvestment', which is common in communications applications. An application like Zoom, for instance, needs a lot of engineering work to deal with old computers, new computers, people with four cameras attached to their computers, people who have slow internet connections, etc. A 'just works' experience is essential and the money is there to support it. A decade later it will be in a harvesting phase, the money won't be there to fix funny little bugs, and now it 'just doesn't work'.
There are other problems around recurring subscriptions, for which Creative Cloud is hardly the worst example. See the gaming industry, which has moved away from experiences that are 20-120 hours to wanting to be the one game you play for decades.
tobr 35 days ago [-]
I got into interaction design and UX design because so much of it was so bad. At some point along the way, it seems we got too good at it. Many of the points in this article frankly make me feel somewhat embarrassed to describe myself as a UX designer. Maybe I should start to think of myself as a… personal computing designer? small software designer? dignity designer? (No, surely designing ”dignity” is somehow an even worse pretense than designing ”experience”.)
gyomu 35 days ago [-]
We lost the plot when interface design became UI/UX (and all the associated modern variants on this terminology).
The goal of an interface is clear: it is how a human interacts with a machine. Buttons, dials, latches, sliders - those are interfaces. We can reason about them, make taxonomies, determine what operations they are appropriate for (or not), and so on.
“User experience” tries to capture everything into a nebulous haze that exists not to serve a human with a task to accomplish that a tool will assist with - but a business and how it will capture “users” and guide them on the “journey” it deems most appropriate to reach its sales goals.
Design students won’t be able to formulate a cogent thought on what the properties of appropriate interface feedback are, but they’ll be great at cranking out “personas” and sleek landing pages that enumerate marketing points. Something’s rotten.
samiv 35 days ago [-]
We lost the plot when the MBA ass clowns took over and started adding "engagement" and other features that only serve the interests of the developer, not the user.
The typical corporate software now primarily serves the needs of the developer, and the user is secondary. In many cases the user has no choice but to succumb (due to lack of competition, or enterprise/workplace policies, etc.) and eat their frustration, because there's no alternative, or they're not allowed to use the alternative, or the alternative is equally trash.
eXpl0it3r 35 days ago [-]
I disagree that UI/UX as interface design is where we lost the plot. After all, UI mostly refers to the looks and UX to the interaction; both have always existed and on their own are certainly not bad.
For me one of the biggest issues is that neither the developer nor the UI/UX designer is an actual user of the software they write. If you don't actually understand how users end up using the software (or, better yet, skip the interviews with users entirely), you'll never be able to write software that truly serves them.
Additionally, you have the separation between developers and UI/UX, which leads to another area where trade-offs need to be made to accommodate (new) requirements. In that sense you might be right: when the developers created everything, they were able to shape something way more efficient and concise, but often also at the cost of looks and potential accessibility.
A second issue I see is that software these days is always developed for the average user, whereas in the past software was developed for experts. As such, the look becomes "simplified" and the UX dictates a layout that is cumbersome for experts and power users.
slfnflctd 35 days ago [-]
I feel compelled to say that this is the most concise and accurate summary of the whole mess I've yet seen.
When you take something straightforward which is grounded in hard logic and measurable outcomes, and try to combine it with abstract concepts about feelings and organizational goals that overlap with multiple philosophies, a massive amount of space for endless argument over details is created.
card_zero 35 days ago [-]
I found an archive of Apple's Human Interface Guidelines.
Do they still have these? Anyway I present it as a curiosity. Maybe they're heroes for sticking relatively closely to objective considerations like how a slider should behave and when to go fullscreen, or maybe it's all their fault for being pretentious and tastefully elegant and dumbing things down with a 1-button mouse in the first place. Maybe it shows a smooth evolution from good intentions, and successful corralling of terrible erratic interface choices (from the dawn of time, i.e. the 80s and 90s) into something logical and standard, through into an inevitable rise of nebulous "user experience" over function. Or maybe it was all nebulous and abstract from the start, and the high point was in the middle, somewhere around 2000, where it went through a phase of nitty-gritty good sense which faded away again, I don't know.
It all kinda changed when interfaces had to be big enough to differentiate between buttons with your finger instead of a mouse.
andrepd 35 days ago [-]
> Something's rotten
It's called late capitalism. We cannot structure a society around not only profit, but infinite growth, and expect that not to give us any problems. It does.
You cannot just sell as many copies of Photoshop as last year: you must sell more, each quarter, and at a higher price. You cannot sell as many phones as last year: you must make batteries non-removable so phones break quicker, and you must spend 3x your R&D budget on ads to manipulate people into buying a phone identical to last year's. Etc etc
moron4hire 35 days ago [-]
I think the big red flag with UI/UX was that it was a new term invented for a concept we already had: HCI. Human-Computer Interaction is what it was called when companies like Apple and Microsoft were releasing style guides for application developers to adhere to, to ensure their applications remained consistent with every other application on the system.
By inventing a new term, that generation of designers signalled that they were ignorant of that past work. And the specific choice of terminology further signalled the shift in values for the designer. It was no longer about creating optimal interface points between the human and the computer; it was about creating "an experience".
It centered creating a distinct brand identity for the application through the design, a goal that is anathema to the goals of HCI. Because of this, it was not enough to use the native UI toolkit of the system, which had been carefully designed around consistent experience. It became necessary to reinvent UI tooling from scratch, with the primitive elements available in the browser engine being readily available for doing such. The cross-platform nature of the browser also presented an opportunity for the MBA- and SV-driven focus on monopoly to pursue total market dominance, rather than just creating one good app for the users you have on the one system.
skydhash 35 days ago [-]
> It centered creating a distinct brand identity for the application through the design, a goal that is anathema to the goals of HCI
Not really, as the constraints of HCI are relatively loose. The fact is that current UX designers don't read the literature and don't take usability into account. Instead, UI design is just aesthetics and UX is ruled by sales.
A platform should be consistent, but you can add your variation as long as it's usable. Which most UI toolkits allow.
atoav 35 days ago [-]
UX can be used and abused. The biggest sin is that instead of thinking about the big picture of someone sitting at their computer and trying to do a thing that requires multiple programs to work together, bad UX designers assume they start from scratch without any existing environment and have only to answer to their own project.
Imagine if something like command-line pipes existed for GUI applications, and then ask yourself why it does not (the closest thing might be copy/paste).
If you as an UX designer try to empower your users and do that in a way that does not break all existing convention without a good reason, you will be fine.
An industrial designer can design medical injection systems for drug cartels with the goal of catching more sheep, or for a medical non-profit with the goal of making actual medical help cheaper — design is not the problem, the business interest it might be used for is.
Designers aren't always in the position to make their decisions freely, but you can put your weight behind the right side and everybody in here will appreciate you for doing so.
ben_w 35 days ago [-]
> No, surely designing ”dignity” is somehow an even worse pretense than designing ”experience”
Much.
"Dignity" seems be be mostly used when it is missing — without any dignity, lost their dignity, helping others regain their dignity…
And Dignitas.
"Experience" seems still positive to me, at worst a bit cliché.
samiv 35 days ago [-]
Based on my 25 yoe in the industry the article is extremely cynical but unfortunately very much spot on.
Today in the corporate world the MBAs are running the show, and they need continuous growth and engagement. These are all needs of the developer, and they manifest themselves in the software as features that are only there to serve the needs of the developer, not the user, often combined with (deliberate) dark patterns and confusing UX just to drive up the KPIs.
What a load of BS.
And the user often has no choice but to eat their frustration due to lack of alternatives, corporate/workplace policies or the alternatives being equally bad.
Unfortunately I expect the whole concept of "PC" as in Personal Computing is going to go away and will be replaced by "locked down, you don't own shit computing".
All the major platforms today are more or less locked down (Android, iOS, macOS), and Windows is on its way. I expect in the next 5 years Microsoft will introduce mandatory app signing and will lock the platform down so that the Microsoft store is the only place for installing apps. They can then shove all the Candy Crush, Azure, etc. garbage in the face of users who have no choice but to eat it.
Linux is the only bastion of hope, but unfortunately that's still (on the desktop) alpha quality and will likely never move past alpha.
Hopefully I can retire soon.
WhyNotHugo 35 days ago [-]
> I expect in the next 5 years Microsoft will introduce mandatory app signing and will lock the platform down so that the Microsoft store is the only place for installing apps.
This is already the case on lower-end hardware.
> Linux is the only bastion of hope, but unfortunately that's still (on the desktop) alpha quality and will likely never move past alpha.
I fear that a lot of major software is suffering from a similar issue: targeting a fantasy “user” who is literate enough to install, maintain and use Linux, yet illiterate enough that displaying keyboard shortcuts or showing a menu when right-clicking would confuse them.
There’s little effort to stabilise software. The priority is often “I want all apps to use my chosen theme”, and not “I want apps that work out of the box”.
And there’s a huge trend of “X for GNOME” or “Y for KDE”. Portable software has become a niche thing.
kbolino 35 days ago [-]
I think a bifurcation is likely. You can't program these locked-down devices/systems on themselves. At the end of the day, some of us have to be able to write the code that makes these things work. So I think there will always be "unlocked" or "developer" operating systems and hardware. But those systems will be considered "too powerful" or "too dangerous" for mundane tasks, especially entertainment.
Right now, the legacy operating systems like Windows and macOS are trying to straddle the line. I think in the not too distant future you'll have to choose between being able to run a debugger and hack the OS vs. being able to watch shows, play video games, and access your bank account. I'm not exactly sure how the split will happen, it could be through totally different ecosystems, or an irreversible flag in the firmware, or maybe something else entirely, but it seems almost inevitable at this point.
stuartjohnson12 35 days ago [-]
Android has already gone this way - certain features, like access to cardless payments, are disabled if you're running a "not locked down" version of Android in any way.
I bought a second-hand phone recently and learned that one the hard way when I went to pay for my journey to work. The previous owner had enrolled the phone in the Android beta program, which, this being Google, is remotely controlled and required me to spend half an hour Googling to understand how to unenrol myself and my phone.
keyringlight 35 days ago [-]
What gets me about Android is the "have your cake and eat it" way it approaches whether or not it's locked down/secure.
Over time the usage of Android has changed as more users and more services adopted mobile; for a lot of people it is an appliance, and I can appreciate that payments and linking to bank accounts are a convincing argument for something like Play Integrity/SafetyNet. Where I think there's conflict is that Android hasn't entirely moved away from the early days, when it was something a bit closer to a personal computer with a high amount of freedom, or at least they haven't been clear or drawn a line in the sand (this while the issue of third-party app stores and whether they must be somewhat open is still being resolved).
If you do some unlocking/rooting, what it enables either gets a bare-minimum CYA notification or doesn't get flagged in the UI at all, as presumably the user is meant to know what they're doing at that point, even in cases where apps refuse to run or just crash with no feedback. I'd appreciate it more if they did put up a big red warning, "this is happening because your phone is in insecure mode because you did X", but on the surface it appears as if the phone is fully operational but acting weird.
skydhash 35 days ago [-]
I don’t mind locked-down hardware as long as it’s usable. I don’t open a microwave or a TV for fun, but I do care about the experience while using them. If you want to do one thing, do it well. And do it even after the manufacturer collapses, unless there’s a service involved. What I don’t like is when it’s my computing resources that are involved, but they want to reassure the mothership that it’s legitimate use.
Let’s take an ereader as an example. I can buy it knowing that it can only display books from Amazon Kindle and that I need an active subscription or a license for each book. Or I can buy it knowing that I can display any ebook file I have. It’s one or the other, or both together independently.
Which is why I don’t mind the old App Store model. I know that it’s locked down and I can license apps which will be tied to my account. When an app is no longer supported, I can still download the version that works. But now licenses are temporary, so as soon as you’re not able to pay, the app breaks.
kbolino 35 days ago [-]
A locked-down device is pretty much definitionally a subscription and not a perpetuity. You cannot fix a locked-down device yourself, so the vendor has to do it for you. When the vendor stops doing that, all of the "benefits" of it being locked down (DRM-protected media, games with anticheat, secure bank access, etc.) cannot continue. I can see the case that locked devices should be easy to unlock when vendor support ends, but then they'll lose most of their functionality too, since the primary reasons for owning them in the first place go away with the unlocking process. An old tablet that can't access any content is little more than a really thin brick.
WhyNotHugo 34 days ago [-]
> I think a bifurcation is likely.
Surely Apple has some other OS used internally for debugging/testing hardware prototypes. I heard rumours that it's Linux, but it might just be something else built on their BSD core. Whatever it is, it's kept secret and not available to the general population.
> I think in the not too distant future you'll have to choose between being able to run a debugger and hack the OS vs. being able to watch shows, play video games, and access your bank account.
This is already the case on Android. Technically on iOS too; you can't run a debugger there at all.
kbolino 33 days ago [-]
Yes, I'm taking the current situation on mobile and extrapolating it to be the future of desktop computing as well. Tablets and Chromebooks are already there. But Windows and macOS are headed in that direction too. Will there still exist unlocked or at least unlockable versions of those operating systems? Or will developers have to use what will then be specialty hardware on which they can only run Linux (and other free operating systems)?
nolist_policy 33 days ago [-]
It's a bit of a hack, but on Android apps can connect back to the phone over loopback with adb.
Also, ptrace works as well.
openrisk 35 days ago [-]
> Folk music is enmeshed in a particular culture. It is knowledge transmitted across generations, but which evolves to meet the challenges of the times and changing sensibilities.
This is a beautiful and unexpected connection. Drawing analogies between software and other forms of cultural expression is a long overdue mental shift. The use of linguistic expressions such as "tech" and "engineering" highlights the prevailing desire to think of software as some sort of thing apart, less social, less political (and thus something we can profitably pursue with fewer moral qualms).
The switch from the original mentality of software as a product (literally shipped in a box), to the current business model of "user as a product" and software merely being the bait and hook is so profound that we are not really talking about the same industry anymore.
It's not clear where the strange and twisted journey of software will lead. Infinite reproducibility at zero cost is not something current economic systems can handle. Either the enshittification continues, further enshittifying society, or open source becomes the norm.
layer8 34 days ago [-]
This reminds me of the concept of “user” as presented in the 1982 Tron movie, where users were regarded as sovereigns with god-like powers by the software (“programs”). The notion of “user” in the article is almost the reverse. We should return to that older conception.
lokimedes 35 days ago [-]
I like the idea of either augmented computing or calm computing. Either it fits comfortably on/in me, or it is seamlessly integrated into the ambient environment. The basic idea that we are a mechanistic factory line, where computing is an industrial tool and I have to master my role in the process or get my hands mangled, is sickening.
I really hope the LLM wave makes this view realistic. We still mostly see tools that enhance our use of tools, but I believe LLMs' strongest capability is that they provide efficient translation between human needs and machine instructions.
skydhash 35 days ago [-]
What about just computing? Giving us tools and the manuals that come with them. You bought a computer, you bought/got a set of software that helps accomplish some tasks, and that’s it. Mac and macOS used to be that, but they’ve wrapped it up in hazardous “services” brought in by MBAs. Windows used to be that, more versatile and fragile, but they’ve destroyed it. Linux is that, but you have to learn CLI speak.
K0balt 34 days ago [-]
The shitshow that has washed over the internet is about to become a parched wasteland as human eyes turn away, towards AI UX. The internet will become a place for AI agents to search out services to provide to their users through their own UX, be it voice, audiovisual, text, or some kind of universal AI “desktop OS” interface.
API endpoints will be the new UX, and adversarial prompt engineering will be the new “dark pattern”.
The internet will become a place that your AI goes to discover and do things for you.
zombot 35 days ago [-]
I wish I could call it cynicism, but it's just the unvarnished truth.
mediumsmart 35 days ago [-]
“Users” are a commodity, a hot one perhaps, but like any other commodity, can be bought and sold.
that reminds me of an ancient saying - change your mind and you change your world
foobarbecue 34 days ago [-]
Words tend to accumulate negative connotations, and then it's tempting to discard or ban them for synonyms. I agree with most of what's said in the article, but the theme that calling a spade a shovel will fix things is silly. Misaligned incentives have to be realigned, and that means stepping back from extreme crapitalism.
Nasrudith 35 days ago [-]
There were whiffs of it before, but I immediately knew the writer was full of crap once he whipped out the communist-conspiratorial view of Taylorism. Talk about a red flag!
hello_computer 35 days ago [-]
Yes cyborg. The more you unite with the machine, the more you will be used like one. There are no free rides. No calculating or politicking your way out of the metaphysical toll.
huijzer 35 days ago [-]
One big problem, I think, is how big tech gained monopolies and can now coast easily for years. This excites me about AI. It’s a disruptive technology that is likely to disrupt some of the big tech companies, just like Google disrupted Altavista. Big tech will probably struggle to adapt whereas small teams will come out of nowhere with better solutions; see Cursor AI and DeepSeek as recent examples.
Basically I don't think a standardised child-of-HIG formalised UI/UX grammar could have kept up with the pace of change of the last 10-15 years. Probably the nearest we have is Material Design?
Seems to me to be a combination of things, none of which indicate that the new products are implicitly better than the old. The old products could’ve incorporated the best elements of the new. But there are a few problems with that:
- legacy codebases are harder to change; it’s easier to just replace them, at least until the new system becomes legacy. Slack and Discord are now at the “helpful onboarding tooltip” stage
- the tooling evolved: languages, debuggers, IDEs, design tools, collaboration tools and computers themselves all evolved in the time since those HIG UIs were originally released. That partially explains how rapidly the replacements could be built. And, true, there was time for the UX to sink in and to think about what would be nice to add, like reactjis in chat
- incentive structures: VCs throw tons of money at a competing idea in hopes that it pays off big by becoming the new standard. They can’t do that with either open source or an existing enterprise company
I think the issue was less one of legacy codebases, and more that getting consensus on protocols and so on is _always_ slow and difficult, and that expands exponentially with complexity. And as the user-base expands, the median user's patience for that stuff drops. "What the hell is an SMTP Server and why should I care what my setting is" kind of stuff.
Meanwhile the walled gardens can deliver a plug-and-play experience across authentication, identity, messaging, content, you name it.
And this against a background of OS platforms (the original owners of HIGs) becoming less relevant vs Web2 property owners, and that strict content/presentation separation on the Web never really caught on (or rather, that JS single-pagers which violate those rules are cheaper and sexier). Plus a shift to mobile which has, despite Apple's efforts, never strictly enforced standards - to the extent that there's no real demand from users that apps should adhere to a particular set of rules.
It even happens in the FOSS world. Open Source theorists tell us all the time that "free" only means "free-as-in-freedom". That we can share the code and sell the builds.
But whenever someone actually wants to charge users money for their own FOSS apps, even if it's only a few bucks to pay for hosting and _some_ of the work, outraged users quickly fork the project to offer free builds. And those forks never, ever contribute back to the project. All they do is `git pull && git merge && git push`.
Maybe the Google+Apple move was a strategy against piracy. Or maybe it was a move against the FOSS movement. And maybe the obsession with zero-dollars software was a mistake. Piracy advocates thought they were being revolutionaries, and in the end we ended up with an even worse world.
Then again, maybe the extra lag (and jitter!) gets a pass because it's part of these products positioning themselves in the niche of "ask the controlling overlord if you may do something" rather than "dependable tool that is an extension of your own will".
Nowadays every new "feature" introduced in MS Word is just randomly appended to the right end of the main ribbon. As it is now, you open a default install of MS Word and at least 1/3 of the ribbon is stuff that is being pushed at users, not necessarily stuff that users want to use.
At least I can keep customizing it to remove the new crap they add, but how long until this customization ability is removed for "UI consistency"?
Photoshop has a terrible set of conventions. I'd take Macromedia Fireworks any day of the week instead. But Adobe bought Macromedia and gradually killed Fireworks over 8 years due to "overlap in functionality" between it and 3 other Adobe products.
That action pretty much enabled enshi*tification of Photoshop which wrapped the terrible core of Photoshop with terrible layers of web and ads.
One of the reasons I like Google Workspace. I'm mostly not a power user these days, so the simpler option set really works for me, even if I very rarely run into something I can't quite do the way I'd prefer.
Word had like 20 different toolbars. If you wanted to work with tables, you had to open the Table toolbar. If you wanted to work with WordArt, you opened the WordArt toolbar. You could be the 99% user and still end up with 10 toolbars open even if you only needed one button on a certain toolbar. On a small monitor, half your screen consisted of your document and the other half toolbars.
I can’t remember if you could customize toolbars but not a lot of people would spend the time to do that. I sure never did.
You could, but the typical user only did a few customizations (if any).
If anything, these kinds of layout patterns are better today than they were back then. What the OP is complaining about was ALWAYS a problem for any long-lived piece of software. But given how much _older_ some software is today, it is no wonder it is far more noticeable.
Win95 had a bunch of Win3.1 settings dialogs, just like Win11 still has a bunch of Windows 98 settings dialogs (the network adapter TCP/IP configuration one comes to mind).
I mean just look at this:
https://www.reddit.com/r/Windows10/comments/l85hsx/windows_3...
Just like vinyl made a comeback, 'real' software will come back as well, maybe running in an emulator in a browser. Yes, there will be copyright problems, yes, there will be hurdles, but if and when MBA-ware becomes the norm, real software will persevere. Free software for sure, commercial software most likely, even if legally on shaky foundations. The tighter they grip, the more users will slip from their clutches.
Well, at least one of us must be living in a bubble.
The format recently outpaced CDs [0], though I think that's partly due to a decline in CD sales too.
[0] (2023) https://www.bbc.com/news/64919126.amp
https://consequence.net/2023/04/half-vinyl-buyers-record-pla...
We live in the timeline where the Ferengi were the ones who bootstrapped the Borg with their hyper-consumption trade culture
All will be consumed into the financial market (Borg cube) eventually
It's funny that today there still isn't free image editing software comparable to the Photoshop of 2000. Krita is close, but still cumbersome to use.
I think the same thing happened to Windows and Office.
Having said that, there's probably another elephant in the room, namely the current generation that grew up with phones and tablets and didn't really learn to use traditional computers fluently.
I'm an old Photoshop user who has GIMP now. I'd like to do a breakdown of everything that's wrong with its interface behavior, but analysing exactly how it does behave would be a major mission. There's something - several things - wrong with how it selects, moves, deselects, selects layers, and zooms, compared to what I expect for the workflow I try to have. Possibly this is just a matter of needing to learn new conventions, but possibly I have learned the GIMP conventions and they're just clunky.
Interesting, though, since this is organic, grass roots, free software interface crappiness, not the coercive corporate kind.
It seems clear to me that most people get used to some interface and anything different is wrong. I saw that a lot with Windows users changing to Mac or Linux interfaces. Anything not 100% identical is "interface crappiness". Apple interfaces have always run circles around Microsoft's, which copied things from Apple without understanding the fundamentals or hiring proper designers.
My problem with GIMP was always that it was not as powerful for the professional, like the support for color spaces with lots of bits per channel. That looks solved now, although I haven't had time to test it personally.
On the other hand, you could program GIMP much more easily with Script-Fu (a Lisp dialect), with fewer restrictions, as with most open source software.
On the whole I didn't think there was much difference, apart from a general vibe of crassness on Windows, which had no clear cause. But that was switching from Classic Mac OS: you're probably talking about the new OS X BSD/Linux-y one. Besides, it would have been Win2K I switched to, which was one of the best iterations.
I used Linux for a while too, but that was XFCE, so again kind of samey. Mainly I remember constantly having to go through sudo whenever I wanted it to do anything; that was the distinctive interface difference.
Color spaces, a closed book to me. I can't stand color spaces, they were always some sort of unwieldy mess of interest to other people. People who print things, maybe, I don't know. From my perspective, they caused a lot of pretentious confusion when people were trying to make images on computers for viewing on computers and for some reason didn't do the natural thing and think in three bytes of RGB.
Scripting, I hadn't even thought about, and I'll give you that one. Photoshop had a mechanism for recording actions and playing them back, and it produced a linear list of actions, with no control flow. I definitely wanted a better scripting mechanism.
I wrote a little tkinter program to do it because, not least, tkinter is in the Python standard library.
I found out my stereograms looked OK in tkinter, but when I exported the images to the web there were ghosts. What I found was that modern GUI frameworks do color management in the sense that they output (r,g,b) triples in the color space of your monitor, which is what you get when you take a screenshot on Windows. So if you have a wide-gamut monitor you can do experiments where you have a (0, 192, 0) color in an image specified in sRGB and then find it was (16, 186, 15) in a screenshot. [3] Drove me crazy until I figured it out.
What was funny was that tkinter was so ancient that it didn't do any color correction, it just blasted out (0, 192, 0) when you asked for (0, 192, 0).
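To make that concrete, here's a minimal sketch (not the actual stereogram program) of the pass-through behaviour described above; the colour value is just the example from earlier:

    import tkinter as tk

    # Minimal sketch: tkinter passes the raw RGB value straight through,
    # with no conversion from sRGB to the monitor's colour profile, so a
    # #00c000 swatch looks more saturated on a wide-gamut display than a
    # colour-managed viewer would show it.
    root = tk.Tk()
    root.title("Uncorrected swatch")
    canvas = tk.Canvas(root, width=200, height=200, highlightthickness=0)
    canvas.pack()
    canvas.create_rectangle(0, 0, 200, 200, fill="#00c000", outline="")
    root.mainloop()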
It also turns out to be a problem in printing, where the CMYK printer presents itself as having an RGB color space where the green is very saturated but never gets very bright, and sRGB green also has red in it, but if you attach the printer's color profile to an image you can force a saturated green. A lot of mobile devices are going in the Display P3 direction so you can get better results publishing files in that format.
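For the Display P3 part, a rough sketch with Pillow; the file names and the "DisplayP3.icc" path are placeholders, and you'd need a real Display P3 ICC profile on disk:

    from PIL import Image, ImageCms

    # Sketch: convert an sRGB image to Display P3 and embed the profile so
    # browsers and phones know how to interpret the numbers.
    srgb = ImageCms.createProfile("sRGB")
    p3 = ImageCms.getOpenProfile("DisplayP3.icc")   # placeholder path

    im = Image.open("photo_srgb.png").convert("RGB")
    converted = ImageCms.profileToProfile(im, srgb, p3, outputMode="RGB")
    converted.save("photo_p3.png", icc_profile=p3.tobytes())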
That and a few other print projects (so easy to screw up a $150 fabric printing job) have gotten me to care a lot about color management.
[1] https://en.wikipedia.org/wiki/Anaglyph_3D
[2] https://en.wikipedia.org/wiki/Vergence-accommodation_conflic...
[3] the native green on the monitor is more saturated than the sRGB green, so it adds a little red and blue to make it look the same!
https://www.contrado.com/print-on-demand
and something I found amusing was that almost all of them will only take files in sRGB format even though they actually support a gamut which covers some colors better than sRGB and other colors worse. My monitor supports Adobe RGB and covers both spaces pretty well.
I think, however, that they have more quality problems if people send files that have different color profiles so they just take sRGB.
I had some print jobs go terribly bad; turns out yellow flowers are out of gamut for my camera, for sRGB and for CMYK. If you try to print something that has out-of-gamut colors, the printer will do something to force them into the gamut and you might not like it. I learned to turn on the gamut warning in Photoshop and bring the colors into the CMYK gamut before I print, even if I am sending in an sRGB file. It didn't bother me so much when I was printing 'cards' with my Epson ET-8550, but once I had orders come back ruined, I figured it all out.
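Outside Photoshop you can approximate that gamut warning by round-tripping through the press profile. A crude sketch; the CMYK profile path, image name, and threshold are made up for illustration:

    from PIL import Image, ImageCms
    import numpy as np

    # Crude gamut check: convert to CMYK and back, then flag pixels that
    # shifted a lot. "USWebCoatedSWOP.icc" is a placeholder for whatever
    # CMYK profile the print shop actually supplies.
    srgb = ImageCms.createProfile("sRGB")
    cmyk = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")
    to_cmyk = ImageCms.buildTransform(srgb, cmyk, "RGB", "CMYK")
    to_rgb = ImageCms.buildTransform(cmyk, srgb, "CMYK", "RGB")

    im = Image.open("yellow_flowers.jpg").convert("RGB")
    round_trip = ImageCms.applyTransform(ImageCms.applyTransform(im, to_cmyk), to_rgb)

    diff = np.abs(np.asarray(im, np.int16) - np.asarray(round_trip, np.int16))
    out_of_gamut = diff.max(axis=2) > 16   # arbitrary threshold
    print(f"{out_of_gamut.mean():.1%} of pixels look out of gamut for this profile")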
I just appreciate that there are many options, and they can talk to each other pretty well. If one becomes unusable, I have others, and sometimes there are newer ones. I did a stage banner project last December where I had Gimp, Krita, and Inkscape all open at the same time. (With a quick use of an old version of Illustrator in Wine to export into an Illustrator template matching the dimension outlines and the particular color space the printing company needed...)
Photoshop tried to be the everything tool, and it probably is and will continue to be the best kitchen sink (and if I knew it better and had a license, it probably could have sufficed by itself for my project), but for any specific thing there's going to be something else that's better for at least that thing (maybe even one of Adobe's other products, like Illustrator). Krita isn't competing with Photoshop so much as with Photoshop's usefulness in drawing and making art, and in that space are also Clip Studio Paint or Procreate on iPads, both quite popular with hobbyist and professional artists. Gimp isn't competing so much on the art creation side (or even making simple animations like Krita lets you do more easily) as it is on the editing and manipulation side. And when editing camera raws, you'd use Lightroom/Darktable/RawTherapee. Inkscape is vector graphics, a whole other use case and competitive landscape.
(Speaking of old/dead software, I remember using Xara Xtreme LX for a while, it was really slick...)
The current Windows and Word are a shitshow from a UX perspective.
When I first used it on a Mac circa '95 or '96, it already felt like a legacy product for web work: the color management features intended for print meant it would always screw up the colors when you output for the web [1] unless you disabled color management, whereas the GIMP 'just worked' because it didn't color manage. [2]
To play devil's advocate, the old Photoshop was a huge lump sum which meant you'd buy one version and then go six years without updating it. The new one is more accessible to people. Also, I find the A.I. features in Photoshop pretty useful, it is easier than ever to zap things out of images: it took just seconds to disappear a cable in [3] for which the older tools didn't work so well. For [4] it removed a splotch and later I had it add a row of bricks to the bottom to improve the visual balance of the image.
Note people sure complained about Office in the '95 era, see [5]
[1] sRGB was new!
[2] Funny I do a lot of work for print now, some of which pushes the boundaries of color management, such as making red-cyan anaglyph stereograms
[3] https://bsky.app/profile/up-8.bsky.social/post/3lh7qqbqra22y
[4] https://mastodon.social/@UP8/110607460518784045
[5] https://www.amazon.com/Office-97-Annoyances-Lee-Hudspeth/dp/...
> User is dead. User remains dead. And we have killed him. How shall we comfort ourselves, the developers, the designers, the growth hackers? What was holiest and the final judge of all that the world has yet owned has bled to death under our A/B-tests and new features. Who will wipe this blood off us? What garbage collector is there for us to clean ourselves? What conference of atonement, what disruptive technology, what sacred meeting shall we have to invent?
Tasteless, but I felt, and still feel, like the notion of a user is truly lost. Somehow, the only technology which technically allows direct one-to-many relationships between a small group of builders and a vast number of users managed to create an industry which actively prevents and disincentivizes such relationships.
Oh no! Working with end-users is madness. It's tiring, exhausting, counterproductive. Users don't know what they want, and will demand the worst possible solution for them. They will resist and circumvent security measures in the program. They will make sure to use the program in an unintended way and then endlessly complain about it not working in the way it was never meant to work.
The day I transitioned from B2C to B2B my emotional well-being improved tenfold.
Now, on a more serious note: making an individual user facing product is more expensive. It's easier to pitch and sell the product to an organization managing multiple users because the organization will agree and compromise on many aspects of the product, and then will create internal organizational policy for its users to use the product only in permitted ways. It will make feedback and improvement requests expensive for the users and will serve both as the customer surveyor and as the first tier of the customer support for the software shop. It's a match made in heaven (and that's how Microsoft and the likes built their empire). There's nothing surprising about that.
Bean counters and social status games players came in and spewed money everywhere and said to the engineers: "do your engineer thing, we'll handle the money and the people".
⁃ Does the “user” feel respected by the software?
⁃ How does this software affect the mental health of the “user”?
⁃ How does the software fit into the rest of the “user’s” lifestyle?
⁃ Does this software help the “user” perform a task/entertain them without coercion?
A lot of modern software looks really bad if evaluated through the lens of these four questions.
We need to talk about Jira...
The user is often not the end customer of any software, so it’s not optimized for their benefit.
How would you exercise your free choice of products, if you don't know about them?
Selling and advertising could be and has been done in a "positive-sum" way, it's just not as effective.
I would point to the _under-regulated_ stock market as the problem, causing short-term profit to be highly incentivised... but then again that's a slippery slope too, and I don't have the macro-economics chops to propose where exactly the problem is and how to fix it :/
Take TV, for example; advertising is expected during commercial breaks, which are clearly marked as such, are regulated around here for the public TV channels (so they can't interrupt regular programming), and have an explicit intro/outro segment to make it clear when regular programming resumes. Radio follows similar rules here.
Same with things like magazine ads; they're marked as separate from the rest of the page, usually explicitly don't follow the magazine's style, and are usually only a small part of the magazine.
By contrast, digital advertising's most insidious trick is that no such boundaries exist. Ads must be shoved everywhere in as many places as possible. You got some whitespace? Fill it with ads, ads and even more ads. People use adblockers? Directly insert your ad into the content and call it a sponsorship. People are looking at ads? We gotta track all of it so we can allegedly improve the clickthrough ratio and profile them, privacy be damned.
It's an industry entirely based on taking a hand when offered a finger to make a line go up. Ads aren't inherently the problem, the entire problem is the gluttony, greed and avarice that's embedded in the digital ads industry. (Which is funny given how much digital ad payout rates are a seeming race to the bottom, to the point where I wouldn't be surprised if most sites that run ads still aren't even sustained by them anymore.)
The problem is that advertising does two things:
1. inform people of the availability of the product/service
2. persuade people to buy said product
Point 2 is the evil part. Note however that we can easily remove 2 but still have 1. For example, show the information when the user wants to see it, as opposed to shoving it in their face (distraction) when they were doing something else. We could bring back yellow pages. Or have dedicated websites where users can search for products and services. It is not rocket science.
I'd also like to point at tobacco commercials which are banned in many countries now. This ban works (it's probably not perfect but I cannot remember seeing such an ad recently, even online).
When advertising was new, people really liked it. It sold newspapers, it was the thing people wanted before news.
I'd love it if GUI applications were similarly stringent or even had the goal of creating an ecosystem, but they don't; they are competing against each other, trying to grab user attention, bending users to their will, locking them in. Not all of them, of course, but the mental overhead with CLIs is much smaller.
1. Consistently, people reminisce about older tech that they loved, and wishing they could have stuff like that again. But the reality is that people like us, on HN, are not the average person. The tools we adored were great to us because we knew how to wield their power and felt empowered to do so. What parts of those applications could the broader population use easily?
2. What should we be using to create personal/folk/situated software? How do we even accomplish this goal? Again, we (HN readers) know the tools and feel empowered. But for the ideas in this article to come to light, many more people need to be empowered to solve their own problems and tailor their tools to their needs. What technologies should they use? I never get a good answer to that one.
There are some interesting business patterns such as 'investment than disinvestment' which is common in communications applications. An application like Zoom, for instance, needs a lot of engineering work to deal with old computers, new computers, people with four cameras attached to their computers, who have slow internet connections, etc. A 'just works' experience is essential and the money is there to support it. A decade later it will be in a harvesting phase, the money won't be there to fix funny little bugs, and now it 'just doesn't work'.
There are other problems around recurring subscriptions, for which Creative Cloud is hardly the worst example. See the gaming industry, which has moved away from experiences that are 20-120 hours towards wanting to be the one game you play for decades.
The goal of an interface is clear: it is how a human interacts with a machine. Buttons, dials, latches, sliders - those are interfaces. We can reason about them, make taxonomies, determine what operations they are appropriate for (or not), and so on.
“User experience” tries to capture everything into a nebulous haze that exists not to serve a human with a task to accomplish that a tool will assist with - but a business and how it will capture “users” and guide them on the “journey” it deems most appropriate to reach its sales goals.
Design students won’t be able to formulate a cogent thought on what the properties of appropriate interface feedback are, but they’ll be great at cranking out “personas” and sleek landing pages that enumerate marketing points. Something’s rotten.
The typical corporate software now primarily serves the needs of the developer, and the user is secondary. In many cases the user has no option but to succumb (due to lack of competition, or enterprise/workplace policies, etc.) and eat their frustration, because there's no alternative, or they're not allowed to use the alternative, or the alternative is equally trash.
For me, one of the biggest issues is that neither the developer nor the UI/UX designer are actual users of the software they write. If you don't actually understand how users end up using the software, or even skip the interviews with users altogether, you'll never be able to write software that truly serves them.
Additionally, you have the separation between developers and UI/UX, which leads to another area where trade-offs need to be made to accommodate (new) requirements. In that sense you might be right that, when the developers created everything, they were able to shape something way more efficient and concise, but often also at the cost of looks and potential accessibility.
A second issue I see is that software these days is always developed for the average user, whereas in the past software was developed for experts. As such the look becomes "simplified" and the UX dictates a layout that is cumbersome for experts and power users.
When you take something straightforward which is grounded in hard logic and measurable outcomes, and try to combine it with abstract concepts about feelings and organizational goals that overlap with multiple philosophies, a massive amount of space for endless argument over details is created.
Here's 1985:
https://archive.org/details/apple-hig/1985_Apple_II_Human_In...
Here's 2013:
https://archive.org/details/apple-hig/MacOSX_HIG_2013_10_22/...
(More in sidebar)
Do they still have these? Anyway I present it as a curiosity. Maybe they're heroes for sticking relatively closely to objective considerations like how a slider should behave and when to go fullscreen, or maybe it's all their fault for being pretentious and tastefully elegant and dumbing things down with a 1-button mouse in the first place. Maybe it shows a smooth evolution from good intentions, and successful corralling of terrible erratic interface choices (from the dawn of time, i.e. the 80s and 90s) into something logical and standard, through into an inevitable rise of nebulous "user experience" over function. Or maybe it was all nebulous and abstract from the start, and the high point was in the middle, somewhere around 2000, where it went through a phase of nitty-gritty good sense which faded away again, I don't know.
It all kinda changed when interfaces had to be big enough to differentiate between buttons with your finger instead of a mouse.
It's called late capitalism. We cannot structure a society around not only profit, but infinite growth, and expect that not to give us any problems. It does.
You cannot just sell as many copies of Photoshop as last year: you must sell more, each quarter, and at a higher price. You cannot sell as many phones as last year: you must make batteries non-removable so phones break quicker, you must spend 3x your R&D budget on ads to manipulate people into buying a phone identical to last year's. Etc., etc.
By inventing a new term, that generation of designers signalled that they were ignorant of that past work. And the specific choice of terminology further signalled the shift in values for the designer. It was no longer about creating optimal interface points between the human and the computer; it was about creating "an experience".
It centered on creating a distinct brand identity for the application through the design, a goal that is anathema to the goals of HCI. Because of this, it was not enough to use the native UI toolkit of the system, which had been carefully designed around a consistent experience. It became necessary to reinvent UI tooling from scratch, with the primitive elements of the browser engine readily available for doing so. The cross-platform nature of the browser also presented an opportunity for the MBA- and SV-driven focus on monopoly to pursue total market dominance, rather than just creating one good app for the users you have on the one system.
Not really, as the constraints of HCI are relatively loose. The fact is that current UX designers don’t read the literature and don’t take usability into account. Instead, UI design is just aesthetics and UX is ruled by sales.
A platform should be consistent, but you can add your own variation as long as it’s usable. Which most UI toolkits allow.
Imagine if something like command-line pipes existed for GUI applications, and then ask yourself why it does not (the closest thing might be copy/paste).
If you as a UX designer try to empower your users, and do so in a way that does not break all existing conventions without a good reason, you will be fine.
An industrial designer can design medical injection systems for drug cartels with the goal of catching more sheep, or for a medical non-profit with the goal of making actual medical help cheaper — design is not the problem; the business interest it might be used for is.
Designers aren't always in the position to make their decisions freely, but you can put your weight behind the right side and everybody in here will appreciate you for doing so.
Much.
"Dignity" seems be be mostly used when it is missing — without any dignity, lost their dignity, helping others regain their dignity…
And Dignitas.
"Experience" seems still positive to me, at worst a bit cliché.
Today in the corporate world the MBAs are running the show, and they need continuous growth and engagement. These are all needs of the developer, and they manifest themselves in the software as features that are only there to serve the needs of the developer, not the user, often combined with (deliberate) dark patterns and confusing UX just to drive up the KPIs.
What a load of BS.
And the user often has no choice but to eat their frustration due to lack of alternatives, corporate/workplace policies or the alternatives being equally bad.
Unfortunately I expect the whole concept of "PC", as in Personal Computing, is going to go away and will be replaced by "locked down, you don't own shit" computing.
All the major platforms today are more or less locked down: Android, iOS, macOS, and Windows is on its way. I expect in the next 5 years Microsoft will introduce mandatory app signing and will lock the platform down so that the Microsoft Store is the only place for installing apps. Then they can shove all the Candy Crush, Azure, etc. garbage in the face of users who have no choice but to eat it.
Linux is the only bastion of hope, but unfortunately that's still (on the desktop) alpha quality and will likely never move past alpha.
Hopefully I can retire soon.
This is already the case in lower end hardware.
> Linux is the only bastion of hope, but unfortunately that's still (on the desktop) alpha quality and will likely never move past alpha.
I fear that a lot of major software is suffering from a similar issue: targeting a fantasy “user” who is literate enough to install, maintain and use Linux, yet illiterate enough that displaying keyboard shortcuts or showing a menu on right click would confuse them.
There’s little effort to stabilise software. The priority is often “I want all apps to use my chosen theme”, and not “I want apps that work out of the box”.
And there’s a huge trend of “X for GNOME” or “Y for KDE”. Portable software has become a niche thing.
Right now, the legacy operating systems like Windows and macOS are trying to straddle the line. I think in the not too distant future you'll have to choose between being able to run a debugger and hack the OS vs. being able to watch shows, play video games, and access your bank account. I'm not exactly sure how the split will happen, it could be through totally different ecosystems, or an irreversible flag in the firmware, or maybe something else entirely, but it seems almost inevitable at this point.
I bought a new 2nd hand phone recently and learned that one the hard way when I went to pay for my journey to work. The previous owner had enrolled the phone in the Android beta program, which, this being Google, is remotely controlled and required me to spend half an hour Googling to understand how to unenrol myself and my phone.