His Amazon stint was mentioned in Steve Yegge's famous "Platforms" rant [1] where the reason for his departure was described less amicably:
> Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not.
The true reason for his departure is just a subject for gossip, but even today I agree that the Amazon store UI is confusingly dense and complicated, surprisingly bad UX for a Big Tech consumer-facing company.
[1] https://gist.github.com/chitchcock/1281611
My main gripe with Amazon is how slow it is. It takes me like 30 seconds to change my shipping address and payment method before placing an order. I always assume it’s because they’re using Glacier on AWS to drive that functionality.
That sounds more like your hardware/internet than it does Amazon's fault - I have never had any delay when changing address or payment method while placing an order, and I place a lot of orders on Amazon.
Fiber, 2000 Mbps up and down, ~5 ms latency.
It's slow regardless of my internet connection.
I timed it on my computer, in between my clicks there was about 4 seconds delay total when changing payment method and changing shipping address. Your "30 seconds" delay is definitely something specific to your computing platform. Maybe try upgrading your Pentium 1 computer?
Do I break your balls too and say “I’ve never had any delay” and “there was about 4 seconds delay” are mutually-exclusive?
Do I break your balls further by pointing out there was never a “Pentium 1” CPU?
>Do I break your balls further by pointing out there was never a “Pentium 1” CPU?
You could try to be pedantic to distract from your obvious problem that your computer hardware is slow. "Pentium 1" means the original "Pentium" CPU, it is meant to disambiguate because "Pentium" could refer to Pentium 2, or Pentium 3, but I wanted to be clear that your computer is as slow as an original Pentium computer. But sure, go ahead and be pedantic instead of admitting your computer is slow.
>Do I break your balls too and say “I’ve never had any delay” and “there was about 4 seconds delay” are mutually-exclusive?
I said 4 seconds *TOTAL* delay. So that's 2 seconds delay across two actions, which I don't consider to be any real delay at all in the context of clicking on websites, and is certainly nowhere near the 30 seconds you claimed. If you really are seeing 30 seconds of delay (I doubt you've actually measured it), then your computer hardware is definitely to blame, and not Amazon. This is a you problem, not an Amazon problem.
> surprisingly bad UX for a Big Tech consumer-facing company.
Surprisingly bad?
Ever seen a Microsoft product? A Google product?
I already set my preferences. I don't want to set them a hundred times. No, I don't want a 1 px border, no title bar, and no scrollbar.
Keep in mind that was written like 12 years ago, when things looked differently than they do now. (Notably that's pre material design and pre kennedy (the precursor to material))
The article is from Aug 2005, so more like 19 years... I became suspicious when I was told the hero of the story left Amazon to join Yahoo!, which, y'know, was what people were wont to do back in the day, not so much anymore I'd guess.
Or an Apple product? :P
Good luck trying to find a thunderbolt cable unless you already know they categorize it as a "Mac Accessory - Charging Essentials". Lots of big carousels showing a sum total of 3 items at a time, and you'd better know what a thunderbolt cable looks like, and don't mistake it for "merely" a usb3 charging cable. Or the 0.5m vs 1m.
Or using a mac - window management issues aside (it seems to encourage wasting screen space and peering at a tiny window in the middle of a massive screen....) - the "settings" app is a joke. A huge list of sections on the left, grouped seemingly arbitrarily, with a "Search" that only really works if you already know the exact wording of the option you're looking for. But hey, it's got fancy icons, so I guess that's nice.
This is all a bit tongue in cheek - using a mac to write this. All UX is "bad" in different ways IMHO. "Objectively best" UX is a pipe dream.
>Or using a mac - ...(it seems to encourage wasting screen space and peer at a tiny window in the middle of a massive screen....)
Isn't this normal on Windows too? The few times I have to use it, it seems I'm always doing something where some sub-window will pop up that I need to use or read, but the stupid thing is comically small, but needs to be scrolled (because there's too much content inside it for its size), but I'm forbidden from expanding the window size the way you can with normal windows.
This never happens to me in KDE: if a window is too small by default (which sometimes happens), I can always expand it.
I don't think I've ever clicked into a category on Amazon. I wouldn't need to know it's under Mac accessories, I'd just use the search bar and type in thunderbolt. In what world would clicking through a hierarchical UI be faster or easier than a search?
But, sorry for stating the obvious, the point of the Amazon web site is not to be convenient, or beautiful, or understandable. Its purpose is to sell you stuff, more stuff, more often. I bet some non-zero amount of confusion is increasing sales. It's much like supermarkets that put simple repeat-purchase items like milk at the other end of the store, to make customers walk and notice other items.
I would argue that Amazon's UI could garner more sales and reconstruct more of its slowly fading popularity if it simply stopped being such a dumpster fire of bad design, confusion, chaos, searches that barely work and of course, rampant links to attempts at sales fraud.
The confusion, coupled with Amazon's reputation inertia, does probably drive some bulk sales metrics, but it's a poison pill of scrounging what you can from existing users while potential new ones trickle away to sites that are simply better, easier to navigate and just less likely to sell you falsely packaged shit.
I'm dubious. Investing in development and design is one of the more expensive ways for a company to increase returns. Plus, Amazon are in such a dominant e-commerce market position [0] that they don't need to worry about UX beyond the ability to click "buy". Despite the poor design, their share is forecast to continue growing past 40% this year [1].
[0] https://www.statista.com/statistics/274255/market-share-of-t... (I'm not sure why ebay isn't in this chart?)
[1] https://www.emarketer.com/content/amazon-will-surpass-40-of-...
Yes but in arguing that you miss the point entirely. If Amazon's only motives in how it designs its customer interface are "hah, who gives a shit, we're so big that they'll keep buying even if we mistreat them and expose them to rampant fraud", that speaks of a shitty company which will eventually be eaten by the market. Consumer tolerance is finite, and smarter people than me have noticed the same thing and are working to chip away at it with shoppers.
I'd also hope for something a little bit more honest from Amazon, given all their multi-decade spiels about customer service, but here we are.
> that speaks of a shitty company which will eventually be eaten by the market
Agreed! Although "eventually" takes a very long time, possibly decades. In the meantime there are a host of cheaper tactics for maintaining and draining market share: acquisitions, exploitative terms (like Amazon's "best buy" conditions), regulatory capture, etc.
The issue is that the current system of shareholder capitalism strongly encourages short-term gains in lieu of sustainability. It's very difficult to justify spending on ease of use when you already have the users and there are things you could do to milk them, like making ads more subtle.
What are these sites that are simply better?
Also, Amazon's retail empire is very much buttressed by its speedy delivery service, and thus warehouses and the truck fleet. A much better web site sometimes cannot compete with next-day delivery. (And sometimes can, of course!)
Etsy is pretty slick, although lately I've heard it has similar (but not the same) product/seller trust issues as Amazon.
iHerb has a somewhat Amazon-like UI but noticeably less cluttered and more structured. It highlights a minimum "best by" date, which is also a guarantee that goes beyond pure UI design but provides the kind of customer-friendly interface that informs rather than confuses.
I usually find Webstaurant product listings pretty informative and much cleaner.
From Big Tech, Google & Apple may not have a comparable physical goods marketplace but the marketplaces they do have - books and applications - are much cleaner and more focused than their Amazon counterparts.
Of course, the gold standard is McMaster-Carr.
I know Amazon is more successful than those examples. Microsoft Windows is also much more successful than its competitors, but I don't think most people would see that and argue "the point of an OS/software is not to be easy to use or convenient or understandable". Certainly it's not the point of Windows but I'd say that's because competition involves factors unrelated to quality and quality is multi-dimensional, not because quality is "not the point".
Well, for anything related to photographic equipment, try B&H, much better. Decent search too, and without all the ridiculous crud of false buy leads that Amazon dumps on you despite being one of the world's biggest companies and perfectly capable of providing a clean product search that's reliable.
Or, for random products, even damn walmart does better, and that's just embarrassing. I could go on with this but if you're defending the obviously broken interface that Amazon provides for buying products, I suspect your motives.
The one single thing I continue to use them for is book reviews, not buys, but just reviews, and that's a low bar indeed.
I'm an absolute nobody compared to Tesler, but I remember a hack week we did at Amazon probably in 2009 or so - the SVP (reporting directly to Jeff) was giving a little intro speech about what they hoped to see out of the hack week, and he ended with an admonition: "Whatever you do, DO. NOT. ATTEMPT. TO. REDESIGN. THE. SITE. UI." I was later told by him that Jeff would literally rip your head off if he caught wind of it.
The one-button mouse was one of the worst inventions. Mice need at least 3 buttons to be useful. You sometimes need to select something and sometimes you need to launch it. Or you need to select in several different ways. Because of this mis-invention they had to invent the double-click and all the complexity of timing (try making double-click work for someone old and nearly senile and someone young and fast with the same timeouts - this is often impossible).
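For what it's worth, here's a minimal sketch of the timing problem being described: the whole single-vs-double-click distinction hangs on one threshold, and any fixed value is a compromise between slow and fast users. The names and the 500 ms default below are illustrative, not taken from any particular toolkit.

```rust
use std::time::{Duration, Instant};

/// How a button press was interpreted.
#[derive(Debug)]
enum Click {
    Single, // provisional: may be "upgraded" to Double by the next press
    Double,
}

/// Illustrative double-click detector: everything hinges on one threshold.
struct ClickDetector {
    timeout: Duration, // too long: fast users get accidental doubles;
                       // too short: slower users can never double-click
    last_press: Option<Instant>,
}

impl ClickDetector {
    fn new(timeout: Duration) -> Self {
        Self { timeout, last_press: None }
    }

    /// Call on every physical button press.
    /// A real toolkit must either delay acting on `Single` until the timeout
    /// has expired, or make the double-click action a superset of the single
    /// one -- which is exactly the hidden complexity being complained about.
    fn on_press(&mut self, now: Instant) -> Click {
        match self.last_press.take() {
            Some(prev) if now.duration_since(prev) <= self.timeout => Click::Double,
            _ => {
                self.last_press = Some(now);
                Click::Single
            }
        }
    }
}

fn main() {
    let mut d = ClickDetector::new(Duration::from_millis(500)); // a common default
    let t0 = Instant::now();
    println!("{:?}", d.on_press(t0));                              // Single
    println!("{:?}", d.on_press(t0 + Duration::from_millis(200))); // Double
}
```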
I don't know what the ideal number of mouse buttons is, but I agree that it is more than one. The odd thing is its appearance on the Macintosh: a computer where the user was supposed to orient most of their actions around the mouse.
That said, I'm pretty sure the double click came about because certain operations were costly. There is no particular reason why we couldn't use a single click to launch an application or load a document, except that an accidental click would force the user to wait for the process to complete (and it could take an unreasonably long time on older systems). Assigning that function to a secondary button would potentially make the problem worse, since the user would have to keep the functions straight while learning the system, and accidentally clicking the wrong button would be more frequent than an accidental double-click.
I used a (OS 7/8/9) Mac for years with a one button mouse. I was very productive with it! I remember seeing Windows 95 with its right-click context menus but I wasn’t very interested at the time. I was happy using keyboard shortcuts and the like.
I believe Steve Jobs was pretty adamant about keeping the one button mouse on the Mac for years and it was absolutely the right call. The Mac was way easier to use back then!
I am not doubting you were productive with it. However that does not make it better, it just makes it what you know. User experience experts have studied this for a long time, and they all conclude that you need more than one button. (There isn't agreement on the best number as there are some trade-offs, but one is clearly not enough.)
The world has a real problem with people arguing what they know is best without any basis in reality. Even if there is a clear reason one choice is better, most arguments for the better choice end up being because they know it better not the clear reasons it is better (see most metric/imperial arguments)
I have some older relatives who were actually able to use a computer back then, thanks to the simplicity and brilliant design of Classic Mac OS. Now they are essentially shut out of computing for the rest of their lives. They can just barely use their cell phones to make calls and send the odd text. They do all their banking over the phone.
There are tons of people like this. Apple used to be the absolute market leader at making computing accessible for everyone. At some point they got big enough and powerful enough that they could ignore all that and just let their dev teams do whatever they want. They gave up on trying to make real computing accessible to the masses and just pushed all these users to the iPad.
While Apple's ability to create "simple and brilliant" designs in the past can be attributed to the motivation and talent of their staff, I don't think you can say their current failure to do so implies the opposite. We live in a very different world. Computers are expected to do more and, regardless of how much Apple despises it, computers are expected to interact with other systems.
Just think of Hypercard. Many people here will talk about how great it was, and it was great. Yet the most talented developers and designers in the world couldn't recreate it in a form that is both simple and reflects the needs of the modern world. It would always end up lacking essential features or be burdened by an overabundance of functionality.
What about having 2 buttons vs 1 button makes something "inaccessible"? If anything having 2 buttons makes the options that are in the context of the current application more accessible.
But people are incurious, stupid, and just dull-witted, I guess? So you really want the rest of us to have to suffer with 1 button because the lower end of the bell-curve can't handle 2 buttons for reasons related to stupidity?
People who don't come to use a computer for the first time until middle age or later tend to struggle mightily. I'm not sure why, but they seem to have a strong aversion to experimenting with the system. I face this issue almost every day with my 74-year-old father and his iPhone. He can only do the tasks which I explicitly teach him and anything new (like changing a setting he's never changed before) requires me to show him the steps. The fact that changing a different setting in the Settings app is an almost-identical process never occurs to him. He just asks for help every time.
Back in the Classic Mac OS days he did just fine with a 1-button mouse. He was able to click on and interact with everything and got what he needed out of the computer. A 2-button mouse is just utterly baffling to him. It turns every single thing on the screen into a fork in the road: do I left-click this or right-click it?
My 72 year old mother installs her own video cards and has a mouse with 5 buttons. YMMV, I guess.
The 1 button mouse was not superior in any way, and the people that get confused by a mouse with more than 1 button probably should never look at a web page with more than one thing to click. Did your father ever drive a car? Is he aware there's more than one pedal and lever to use? Did he ever find his way to the windshield wipers or did you just drive around with the rain obstructing the view? I'm genuinely curious how such a person could function in this world in the last 80 years.
The pedals in a car always do the same thing every time you press them, in every car you get into. Mouse buttons are not like this. Different applications use them differently. There may be a sizeable chunk that are reasonably consistent but there are tons of outliers that do all kinds of bizarre stuff like using right click to select or to cancel an operation or to bring up a tool tip. Sometimes it’s left click to select and right click to move.
Plus you never know what’s going to be in a context menu until you right click to open it. Sometimes you move the mouse slightly and right click something else and get a different context menu. For an older person with declining vision this can be very confusing. Fixed menus at the top of the screen are discoverable. You can even search for what you want in the help menu. Context menus are not discoverable.
I’m glad your mother knows how to install video cards. My aunt worked at a TV factory installing boards into the case and soldering all the through-hole components to the board. She’s still pretty baffled by computers but she’s almost 80 years old and rarely needs them.
You have to be willfully ignorant or brain damaged to not understand how a 2-button mouse works after the first time using it. The context menu does different things depending on the context. Someone that can't understand that is seriously in trouble in life. It's not something that should be confusing at all.
And as I said, hopefully your father never visits any websites, because they are all different with information in different places on every website. The world must be an extremely frustrating and hostile place for someone that gets confused by a 2 button mouse. I honestly feel bad for your father if everything you say is true.
I don't think it was about him saying "one button is optimal for all time, always"; I think a lot of it was "this is a whole new paradigm for people who only know the typewriter, let's make it simple for them; down the road, when people have adjusted to the simple case, we can make it more elaborate".
Hell, at the time this happened most computer joysticks had only one button, or, in the case of one of the popular joysticks of the day, three buttons that were all just wired to one input.
"... keyboard shortcuts ..." is a common response to defend Mac design choices.
The single shared menu is also something that made sense on the original 9" 512x342 Mac screen to save space, but it really is nonsensical in the days of 32" 6K displays; so much mousing to get up to that menu, but of course "... keyboard shortcuts ..." comes the refrain.
The single shared menu bar has one huge advantage over per-window menu bars: infinite mouse target size along the vertical axis. When moving the mouse to a narrow strip menu bar at the top of a window you need to accelerate the mouse towards the target and then decelerate in time to stop on the target without overshoot. With a menu bar at the top of the screen you can skip the decelerate part and just slam the mouse to the top of the screen without worrying about overshoot.
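A rough way to quantify that advantage, using the Shannon formulation of Fitts's law (the 600 px distance and 20 px menu-bar height below are made-up numbers, purely for illustration):

```latex
% Shannon formulation of Fitts's law: movement time MT grows with the
% index of difficulty ID, where D is the distance to the target and W
% is its width along the axis of motion.
\[
  MT = a + b \cdot \mathrm{ID}, \qquad
  \mathrm{ID} = \log_2\!\left(\frac{D}{W} + 1\right)
\]
% A 20 px tall menu bar at the top of a window, 600 px away:
\[
  \mathrm{ID} = \log_2\!\left(\frac{600}{20} + 1\right) \approx 4.95 \text{ bits}
\]
% The same menu at the screen edge: the cursor cannot overshoot, so the
% effective W along the approach axis is essentially unbounded and the
% difficulty collapses -- you can just slam the mouse upward.
\[
  \lim_{W \to \infty} \log_2\!\left(\frac{D}{W} + 1\right) = 0
\]
```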
You’re right about giant displays though. The best menu system for those is pie menus [1]. Although I would still dispute the advantages of the second mouse button for activating those. The F1-F12 function keys would be much better since you could have instant access to 12 different pie menus instead of a single one with right click.
To be fair, people also had to be taught how to point with a mouse.
The double-click also wasn't essential, you could perform all actions using the one-click menus. The double-click was introduced as a shortcut. From the Apple Lisa Owner's Guide:
The File/Print menu contains all of the commands you need for creating, opening, closing, and storing your documents. Because you use these commands so frequently, the Office System includes a simple shortcut for performing these tasks: clicking the mouse button twice.
To tear off a sheet of stationery, click twice rapidly on the stationery pad icon.
To open an icon into a window, click twice rapidly on the icon.
To close an open window, click twice rapidly on the window's title bar icon.
Clicking twice to close a window can either set aside the object or save and put away the object, depending on where the object's shadows are. If there is a shadow on the desktop, clicking twice causes the object to be set aside. If the only shadow is in a folder or on a disk, clicking twice summons a dialog box, which asks you whether you want the object set aside or put away.
Pointing with the mouse is discoverable though. If you start moving it - which odds are you will do at some point, even by accident - you will see the pointer moving and eventually figure it out.
You are very unlikely to discover the double click by accident.
I think it’s a spectrum. Unless the double-click delay is ultra short, I’m pretty sure you would discover it sooner or later. My point was that you couldn’t expect someone to learn how to use a computer without any training or instructions, so if you need that anyway, you can also include less discoverable features in it.
There is a trade-off between feature sets that are useful if you know them and super discoverable feature sets, in the sense that the reason some feature is more efficient can also make it less discoverable.
The original Macintosh came with interactive tutorial software demonstrating the use of the mouse and giving you practice manipulating it. The double-click was part of this instruction.
Less familiar users are often also confused about whether something requires a double-click or not, double-clicking web links for example. What's worse is that it's often not immediately noticeable, leading to opening the same program multiple times or breaking some submission form.
The Alto used a three-button mouse. It was Charles Irby and Dave Smith who decided it would be two for the Star. It was a bitter debate, but they won.
When it was three, every Alto program had its own set of conventions for them. There was no way that could have been unified for a multi-purpose computer.
All lab GUIs starting from Engelbart's AUGMENT had 3-button mice, including the Xerox Alto. The researchers and engineers using the Alto found that the mouse buttons were confusing; even this small group couldn't remember the combinations in each application, and it led to many inconsistencies between apps. Each button had a very precise function, and they were as much modal shortcuts as direct manipulation buttons.
Even Xerox tried to reduce it to one button for the Star (their first commercial GUI computer), but from their own published account they couldn’t find a way and the Star shipped with a 2-button mouse.
Remember, at the time most people targeted by Apple (or other computer manufacturers) had never used a computer, and the people who did use a computer never used a mouse! (Except for the PARC researchers and some other researchers, so maybe 2000 people worldwide)
It’s people coming from Xerox to Apple who were the most interested in having a one button mouse! They knew the 3-button mouse would confuse users and was entirely unnecessary for most users (as it is today, the secondary click being reduced to just a convenient shortcut, nothing more)
> One button mouse was one of the worst inventions. Mice need at least 3 buttons to be useful.
I think the way the article lays out the invention shows that it was a good idea at the time. Apple sticking to a single button mouse was a bad idea. But it wasn't like Tesler saw three buttons and said "If I remove two, I'm a genius".
I don't know what you do with your third button, but I have never used it. It's not a core interaction in any interface I've seen. On my Windows machine it does all sorts of unexpected things, from starting a scroll mode, to dragging items (very rarely), to closing tabs, to nothing on most things. It's not useful. Now the back and forward buttons are super handy, but I wouldn't say they are essential to a mouse.
As other commenters have noted, it was from actual user testing, and from the experience of the Xerox alumni. Definitely the right call in 1983.
Today of course Apple uses trackpads with "zero" buttons (though the trackpad is clickable in various ways) and a lot of non-obvious (though generally well-designed) gestures like two-finger scrolling, pinch-to-zoom/rotate, etc.
I wonder why they went with WYSIWIG in the title despite having WYSIWYG in the article.
Anyway, hard to blame the folks who invented it, since it was early days, but WYSIWYG was a truly terrible idea. It heavily implies (although doesn't technically demand) that user input produce only local changes, so we've been cursed with all these office documents with terrible spacing. It also ruins our ability to actually communicate with the computer, or describe things on an abstract level. People just poke their documents around until they get something reasonably sensible looking in their current editor.
Is the text reflowed around the figure, or did the user just manually add a bunch of line breaks and then manually paste in the figure (anchored to what?)? We'll find out later if somebody changes the font.
Maybe WYSIWIG almost works, actually. What you see is… whatever I got. Except it only works if we have the same version of the same office suite.
WYSIWYG democratised computerised printing and other areas, arguably providing the backbone of the PC revolution.
You're mistaking Microsoft's specific implementation fumbles with an interaction mode that helps billions of people every day. Don't throw the baby out with the bathwater, there are plenty of good WYSIWYG implementations out there.
I agree, WYSIWYG is not a good idea, and a one button mouse is not a good idea.
You can have print preview if you want to preview the page layout. This is also faster and more efficient than reflowing the text as it is being typed, anyways.
Fascinating... are you aware of the context of how WYSIWYG came about?
So presumably you don't want to return to that state, where you literally would have no idea what it would look like until several minutes later when it finally came out on the printer?
Can you explain a little bit further what your ideal paradigm is?
> where you literally would have no idea what it would look like until several minutes later when it finally came out on the printer?
That is why print preview is a good idea (which is possible with most modern computers; this can be done independently of WYSIWYG editing). For example, if you write a TeX file and then make the DVI file and use xdvi or another previewer to display it on the computer before you print it on paper.
Reveal Codes would be another possibility, perhaps in combination with a "partial WYSIWYG" editor which does not display reflowing etc. in the editor, only in the preview; if you use Reveal Codes then formatting codes are displayed (e.g. bold, italics, etc.), but you can also display the bold, italics, etc. directly during editing. This can be an in-between approach, which gives you some of the benefits of WYSIWYG and some of the benefits of non-WYSIWYG.
While writing documentation in Markdown, I almost always have the preview open in split-screen in vscode, not because I need it, but mostly to make sure that the other people reading my work have a good experience.
Ideally people would write mark-up in text editors. If you want to be very nice, I guess it would be OK to have drag-and-drop WYSIWYG environments that spit out the markup code, but there should be a very high priority on making sure the code produced is human-readable.
Mark-up is something that ordinary users will never understand. "Good enough" is what they want. Printing it to check that it came out right is just fine with them. It is simply not worth it for the software to try to make everything perfect.
Case in point: my book was edited in Vellum, and it can generate PDF and EPUB. The PDF had a widow (one line at the top of the last page of the chapter), and Vellum just omitted that one-line page. To the reader it seemed like a typo (which it was, in a sense).
I "fixed" it by removing a few words up above it, so everything fit on the last full page. But it was only that I knew about widow/orphan control that I could figure that out. Just imagine how much trouble it would be to make things perfectly WYSIWYG on every printer and every type of document.
It amazes me that something as simple and obvious as cut and paste had to be invented. Even more amazing that we can actually point to the person that did it.
Indeed! I'm writing an app that resizes and moves shapes on a canvas (among other things) and I'm amazed at how many trivial little things I had to write that everyone including me would take for granted, including copying and pasting, drawing the little handles to resize the shape, changing the cursor based on what's below it (the handles or the shapes), drawing a translucent version of the shape when it's being moved/resized, changing the position of the shape when it is resized from _some_ of the handles but not all (top left vs. bottom right)...
My experience is that undo/redo isn't something you tack on at the end of the project, it has to be baked into the infrastructure. Every action you take has to be delegated to an object that has all the information it needs to both do the action and undo it. Then you can just keep a stack of those objects and call them as necessary.
Fortunately I'm using iced which is an implementation of The Elm Architecture so undo/redo is just a matter of keeping track of which messages were sent and reversing them as needed.
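For anyone curious, a minimal sketch of the "every action is an object that can do and undo itself" approach described above. The Action trait, the Insert example, and the two stacks are my own illustrative names, not tied to iced or any particular framework.

```rust
/// Each user action knows how to apply itself and how to reverse itself.
trait Action {
    fn apply(&self, doc: &mut String);
    fn undo(&self, doc: &mut String);
}

/// Example action: insert text at a byte offset.
struct Insert {
    at: usize,
    text: String,
}

impl Action for Insert {
    fn apply(&self, doc: &mut String) {
        doc.insert_str(self.at, &self.text);
    }
    fn undo(&self, doc: &mut String) {
        doc.replace_range(self.at..self.at + self.text.len(), "");
    }
}

/// The editor keeps two stacks of boxed actions.
struct Editor {
    doc: String,
    undo_stack: Vec<Box<dyn Action>>,
    redo_stack: Vec<Box<dyn Action>>,
}

impl Editor {
    fn perform(&mut self, action: Box<dyn Action>) {
        action.apply(&mut self.doc);
        self.undo_stack.push(action);
        self.redo_stack.clear(); // a new action invalidates the redo history
    }
    fn undo(&mut self) {
        if let Some(action) = self.undo_stack.pop() {
            action.undo(&mut self.doc);
            self.redo_stack.push(action);
        }
    }
    fn redo(&mut self) {
        if let Some(action) = self.redo_stack.pop() {
            action.apply(&mut self.doc);
            self.undo_stack.push(action);
        }
    }
}

fn main() {
    let mut ed = Editor { doc: String::new(), undo_stack: Vec::new(), redo_stack: Vec::new() };
    ed.perform(Box::new(Insert { at: 0, text: "hello".into() }));
    ed.undo();
    ed.redo();
    assert_eq!(ed.doc, "hello");
}
```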
The alternative, as used e.g. by the Xerox Star, was select-and-copy/move. The advantage over cut-and-paste is that you don't have invisible fragile state.
However, the Star implementation had copy and move modes (select source, COPY, mouse to destination, CLICK) and Tesler hated modes. I don't know why Star didn't use the modeless version (select source, mouse to destination, COPY).
I love that piano-like chorded keyboard (https://en.wikipedia.org/wiki/Chorded_keyboard#/media/File:X...) from the Alto. I think it's still an interesting UI concept, but I think it should/could be adapted to a foot-pedal design; chords would be constrained to 2 inputs though, unless maybe two of the inputs were directly side by side, then you could expand to three. Organists know what's up ;)
To be able to do copy (edit) paste (edit) paste you need independent storage of what's copied. Move requires new UI state, whereas cut is just copy plus delete. With Undo, cut is safe enough, and there's likely something more critical to work on than a Move UI.
Also, I've seen some terrible move UI. It may seem cool to have a big floating blob of text follow the cursor, but that doesn't work well when you want to move multiple pages or across multiple pages.
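A tiny sketch of the independent-storage point, assuming a single shared clipboard (all names here are made up): cut really is just copy followed by delete, and because the clipboard lives apart from the document you can paste it as many times as you like.

```rust
/// Toy buffer with a selection and a clipboard; illustrative only.
struct Buffer {
    text: String,
    sel: std::ops::Range<usize>, // byte range of the current selection
    clipboard: Option<String>,   // independent storage: survives later edits
}

impl Buffer {
    fn copy(&mut self) {
        self.clipboard = Some(self.text[self.sel.clone()].to_string());
    }
    fn delete_selection(&mut self) {
        self.text.replace_range(self.sel.clone(), "");
        self.sel = self.sel.start..self.sel.start; // collapse to a caret
    }
    /// Cut really is just copy + delete.
    fn cut(&mut self) {
        self.copy();
        self.delete_selection();
    }
    /// Paste can be repeated because the clipboard is separate from the text.
    fn paste(&mut self) {
        if let Some(clip) = self.clipboard.clone() {
            self.text.insert_str(self.sel.start, &clip);
            let end = self.sel.start + clip.len();
            self.sel = end..end;
        }
    }
}

fn main() {
    let mut b = Buffer { text: "abcdef".into(), sel: 2..4, clipboard: None }; // "cd" selected
    b.cut();      // text = "abef", clipboard = Some("cd")
    b.sel = 4..4; // move the caret to the end
    b.paste();    // text = "abefcd"
    b.paste();    // text = "abefcdcd" -- pasting twice needs no re-copy
    assert_eq!(b.text, "abefcdcd");
}
```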
If you want to replace by pasting, you'd lose the first selection by selecting what you want to have replaced. Which means you'd need different selection modes depending on whether you are selecting the source or the target of the copy/move. Furthermore, maintaining the original (source) selection while preparing the insertion point (target) of the copy/move is also fraught with some fragility.
> If you want to replace by pasting, you'd lose the first selection by selecting what you want to have replaced.
Yes, that's the tradeoff; you'd have to delete that separately. This would be quite close to the X11 primary selection and middle-click paste. I think that works reasonably well on its own, but trying to provide both models as X11 does is a mess.
I use cut and paste pretty often. Besides giving visual feedback that the operation actually worked, it also makes it easy to move things around, including between files.
Which I found flabbergasting at the time, because it had been a standard feature on PDAs ten years prior. I only bought an iPhone once it gained cut&paste support.
Cut and paste is a sort of obvious miss, but in general, I think Smartphones benefitted from not taking for granted the features of PDAs. There was always something deeply niche about the things.
I think in general we are losing a lot of functionality especially since the phone UIs are slowly creeping into the desktop. Discoverability and consistency are simply horrible compared to how things worked around 2000. I think it's a huge regression.
I can't wait until somebody dusts off the design principles of Windows 95/2000 or Mac System 7 and will sell this as the new UX paradigm.
> I think in general we are losing a lot of functionality especially since the phone UIs are slowly creeping into the desktop. Discoverability and consistency are simply horrible compared to how things worked around 2000. I think it's a huge regression.
Indeed. Remember when every icon had a tooltip that told you what it would do? Remember when it shipped with a book that also told you what each thing did?
I recently used an app that had a unified phone/PC interface, and I was pretty sure that somewhere in a list of icons was a thing I wanted, but I wasn't sure which. I picked the wrong one and then had to figure out how to undo what I had just done.
Who needs to waste time with manuals when you can just Google what you want to do and watch a teenager deliver a three-minute monologue with 15 seconds of actual (but incorrect) content?
There are more misses. For example I found it surprising that they didn’t include a universal “context menu” equivalent (long press would have been obvious) and a universal menu bar equivalent (like Palm OS did). Stuff like this is why we still have an awfully complex and inconsistent UI landscape on mobile.
I think I lightly disagree. Phones are just not good for complex use-cases. I don't want a context menu on my phone; the depth of interactions in a browser, for example, should be... slide the webpage this way, slide it that way, poke a link (or, I guess, to leave room for what I'm doing now, poke a text box to write in it). Dumbing down the UI was a good idea.
We do have all sorts of inconsistent "context menus" now on mobile. Sometimes after you select something, sometimes as items under the share button, sometimes when you actually long-press, sometimes a menu appears when you tap an item, sometimes as action items that appear when you slide an item to a side. And even for a single one of those variations, different variants with different looks exist, etc. A uniform way to "show me all actions I can perform on this item" would be greatly beneficial.
Actually, this conversation has made me realize I only do this sort of “give me more options” interaction in Safari (long press) and Panic Prompt (double tap). I think I hadn’t noticed the inconsistency because 2 is not very many, and also the Panic Prompt behavior is a sort of nice analogy to the typical Linux terminal behavior.
Still though, only two programs and the inconsistency is immediate, haha.
webOS, the poster child for simple and consistent UI, did all of this.
Much like the Amiga, this OS is always imitated, never copied, even though Android should have thrown out everything after Honeycomb to adopt what it brought to the table.
Yeah, I think this was a UI issue more than a 'we don't think it's a needed feature' issue. Long press with that sticky popup was just something the design hadn't arrived at yet... and certainly force-touch tech didn't exist yet.
My dad grew up on a farm, and later regretted not inventing the large round baler that most farmers use - he already knew about small round bales, so the only thing missing was making them larger and then hauling them on a tractor instead of lifting by hand as you did the small bales. Despite saying the above for years, it never occurred to him to invent the large square baler, which uses the same concept (haul with a tractor) but stacks better. Everything was known and so obvious in hindsight.
Often the obvious stuff was invented decades ago, but some old people in power persistently refuse to implement into major products. Like the ability to copy multiple things without overwriting the previous entry. Who would ever need that?!
>> something as simple and obvious as cut and paste had to be invented.
Which it was. A few hundred years ago. Cut-and-paste began as a manual process. Arranging material for print often involved very literal cutting and pasting of text and images. Entire trades (typesetters) were dedicated to the task. A more accurate description of Tesler's contribution is that he was the first to implement the concept in the digital realm. The person who "invented" the delete key did not invent the concept of deleting a character.
I'm not so sure. The art of typesetting something like a newspaper page doesn't exist for most people. They see old-school wooden printing presses ... big gap ... then bubblejet printers. I know people who think newspapers were somehow silkscreened. The idea that someone in the mid-20th century would glue bits of text to a page, which was then transformed into a metal printing plate, is a process most do not appreciate.
Yeah, in the early aughts I had to explain phototypesetting to some students; they just thought everyone used metal type, or its evolution, the typewriter, until the advent of the computer.
Demonstrably untrue, since humanity was made up of people who lived happily without those things and never even thought of them. Probably even laughed at them when they first appeared.
There's also Larry Wall's "Waterbed Theory", effectively that complexity cannot be squashed down: it will out. Though I suspect this post-dates Tesler.
I think it's a fantastic way to describe his accomplishments, it gives context to how early and groundbreaking his work was in a way that even the least tech-savvy can understand. Everyone knows what cut and paste are, no one thinks about the fact that someone had to come up with it.
Limited, and also wrong -- Tesler didn't invent any of those. They existed already by 1973 (supposed date of these inventions at Xerox PARC); e.g. TECO from MIT and E from Yale had functionality for cutting/pasting, replacing strings, etc.
The wording of the article suggests that he came up with the term "cut and paste", rather than the concept:
> In 1969 Tesler volunteered to help create a catalog for the Bay Area’s Mid-Peninsula Free University. He and Jim Warren, founder of the West Coast Computer Faire, did the paste-up for that catalog. Around the same time, Tesler saw a demo of a computer command that allowed you to bring back something that you had deleted. The command was called “Escape P Semicolon” (or something similarly arcane). Several years later, when Tesler was at Xerox PARC writing a white paper about the future of computing, he drew on the memory of those two experiences to predict that you would be able to “cut and paste” within computer documents.
Also, I don't think it particularly counts as an invention as it was heavily used in publishing well before the computer era, and was just shifting to a computing context and re-using the same metaphor. For a long time, text was printed in sections, physically cut up and pasted to a board, and when the entire page was assembled it was photographed to create a negative that was used to print the newspaper.
Just to be clear that I'm not intending to disrespect his work, just arguing the semantic meaning of "invention" with respect to this. His obsession with mode-less user interfaces and user-facing simplicity is far more significant a contribution to society in general (and ironically, cut-and-paste is almost the antithesis of his main philosophy as the once-cut data becomes hidden state - it'd be a better metaphor to highlight the data and physically move it around the document).
Because you've had your head under a rock? It was headline news when he died (which was after this was published).
> “And the question I remember most was from Steve Jobs. He said, ’You guys are sitting on a gold mine here. Why aren’t you making this a product?’”
Xerox WAS making it into a product (the Star). Of course Larry couldn't tell him about that. It failed, just like the Lisa did.
> As one of Tesler’s first tasks at PARC, he and a co-worker wrote a paper on the future of interactive computing, which for the first time talked about cut-and-paste as a way of moving blocks of text, images, and the like. It also described representing documents and other office objects stored on the computer as tiny images—icons—instead of as a list of names [see photo, ].
The "co-worker" was David Canfield Smith, who was directly involved in the Star, unlike Larry.
> He even convinced Apple to invest in a newly created company, Advanced RISC Machines Ltd., also in Cambridge, that would produce them.
And that stake was quite possibly crucial in helping Apple survive.
> Plus it's on record that Apple made a total of $1.1 billion out of selling those shares, which represented a profit of 366 times its original investment. That money helped Apple survive, and Jobs decision to cut the Newton — with its ARM processor — was also part of the surgery needed to keep Apple alive.[1]
Which makes it impossible to replace a selection by pasting. Also, except for terminals, it typically pastes at the pointer location, so you need precise aim (emacs thankfully lets you customize this, but UI toolkit widgets usually don’t)
> Which makes it impossible to replace a selection by pasting.
In principle this is false with a Plan9-like model of mouse chording. Holding left click over a selection and tapping middle click is a reasonable solution.
Yeah, and I keep pasting the wrong thing because the terminal emulator tries to simulate the X behavior (which is very useful) but doesn't maintain a separate buffer like X does.
The comments show a lot of confusion about what Tesler invented. Other industries did indeed use cut/copy/paste, and older editors had ways to do these functions. But the cursor in these editors normally indicated a character. Larry figured out that if instead the cursor indicated the space between characters, with sometimes a second such cursor to mark all characters between them, then a single operation could do what previous editors needed various commands for: replace the selection with what has just been typed and move the cursor to right after that. "Paste" would then just be the equivalent of retyping a previous selection that had been either "cut" or "copied".
If the two cursors were at the same spot (just a blinking vertical bar) then you are inserting text as you type it. If there was some selected text then you are replacing it and then inserting anything more you type. And so on.
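A sketch of that model, assuming a plain string buffer (the names are mine, not Tesler's): the "cursor" is a pair of positions between characters, and one replace-the-selection primitive covers insertion, deletion, and paste.

```rust
/// Tesler-style selection: two positions *between* characters.
/// When start == end it is just a blinking caret.
struct Selection {
    start: usize,
    end: usize,
}

struct TextEditor {
    text: String,
    sel: Selection,
}

impl TextEditor {
    /// The one primitive: replace whatever is selected with `input`
    /// and leave the caret just after it.
    fn replace_selection(&mut self, input: &str) {
        self.text.replace_range(self.sel.start..self.sel.end, input);
        let caret = self.sel.start + input.len();
        self.sel = Selection { start: caret, end: caret };
    }
}

fn main() {
    let mut ed = TextEditor {
        text: "hello world".to_string(),
        sel: Selection { start: 6, end: 11 }, // "world" is selected
    };
    ed.replace_selection("there");            // typing replaces the selection
    assert_eq!(ed.text, "hello there");

    ed.sel = Selection { start: 5, end: 11 }; // select " there"
    ed.replace_selection("");                 // delete is the same primitive
    assert_eq!(ed.text, "hello");

    ed.replace_selection(", again");          // caret only: plain insertion
    // "paste" would just be replace_selection(<previously remembered text>)
    assert_eq!(ed.text, "hello, again");
}
```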
Both at Xerox Parc and at Apple he actually tested his ideas on potential users and often found he guessed wrong about what would work and what wouldn't. He would then try something else.
> Tesler registered a strange combination of sensitivity to people and fascination with math. The best career choice the counselor could suggest was working as an architect or maybe becoming a certified public accountant.
A CPA??
I'm glad to see that counselors have always been terrible?
I wrote a remembrance when he died a few years back. The timing of this story (2005) is a little unfortunate, because the genius of the Newton investment only showed itself later, even with its failure: https://www.vice.com/en/article/n7jdgw/larry-tesler-the-inve...
See, Apple invested in ARM because of the Newton. And on top of the fact that this gave Apple an inside line/competitive advantage with ARM that we’re still seeing today, it also meant that Apple owned ARM stock—and could sell it. When the company was near its nadir in the late 1990s, it nursed itself back to health by selling shares of ARM.
So even Tesler’s biggest failure was a stroke of genius.
I have an old messagepad (110) and ran across it a few months back when I was going through some of my storage boxes. Plopped some AA batteries in it and it booted right up.
Still a great user interface and the handwriting recognition still works great (though it is a little slow).
Very much ahead of its time; the early Palm era was such a massive backslide (aside from size and price).
The one button mouse was significantly underrated. Using the keyboard keys as modifier keys to the mouse was ergonomically great, and anybody who complains about the mouse seems to never really understand how that system worked.
Different people's brains work differently, essentially innately. And skilled trained brains work differently than the same brain did green. It doesn't seem that Tesler's work ever reflected these important details about the world.
I expect if tasked, Larry Tesler would have "invented" the one-button game controller: fuuuuuunnnnnn (Joe Biden can't get enough of his!)
“No modes” - I’ve always considered that to be a bit of a mantra worth following. But now I seem to be breaking that rule while learning Vim (normal mode, insert mode, and so on).
Yesterday I was test-driving a car with eco mode, sport mode... the Larry in me was yelling “no modes”!!!
I also use vim (or neovim). But that doesn't mean that I believe in modes or that neovim/vim is a good editor.
I think there is some kind of psychological thing driving this. Like subconsciously, I came to the conclusion many years ago that "real programmers" use vim or Emacs, and then consciously decided that the default keybindings for Emacs were slightly worse.
So for decades I have been trying to learn just enough vim to get by. But practically every day I miss my PC keys for things like selecting text.
At least three times I have got my keybindings the way I wanted and then after a new install or something just decided to deal with the outdated way that vim does it.
You have to realize the context that vim was invented. There was no WYSIWYG. People were used to things like 'ed' where everything was a command. Just being able to stay in a mode and move around freely on the screen was a big deal. The terminal hardware didn't even have a way to hold a key combination.
Vim modes allow you to keep your hands on the home row most of the time and make a mouse unnecessary for editing. That keeps my hands, wrists and forearms healthy and for that I am grateful. Of course a great programmer is not defined by their tools. What matters is what you create, not how you create it.
WYSIWYG is an acronym for "What You See Is What You Get" that refers to software which allows content to be edited in a form that resembles its appearance when printed or displayed as a finished product, such as a printed document, web page, or slide presentation.
... For anyone else who didn't want to look it up.
Just as others have pointed out that cut-and-paste was a term around long before this reference, so too was WYSIWYG. The Dramatics had a hit song in 1971 [0] using the same phrase as its title (albeit spelled slightly differently).
I hope we don't hear next about the computer hero who "invented" the term "desktop", or "folder".
"Cut and paste" was of course a term used with paper before computers, but arguably the computer version of it is not quite the same, because you have that hidden buffer ("clipboard") and can usually paste the same cut item multiple times. Adapting the physical-world cut-and-paste process to the computer realm can count as an invention.
His Amazon stint was mentioned in Steve Yegge's famous "Platforms" rant [1] where the reason for his departure was described less amicably:
> Jeff Bezos is an infamous micro-manager. He micro-manages every single pixel of Amazon's retail site. He hired Larry Tesler, Apple's Chief Scientist and probably the very most famous and respected human-computer interaction expert in the entire world, and then ignored every goddamn thing Larry said for three years until Larry finally -- wisely -- left the company. Larry would do these big usability studies and demonstrate beyond any shred of doubt that nobody can understand that frigging website, but Bezos just couldn't let go of those pixels, all those millions of semantics-packed pixels on the landing page. They were like millions of his own precious children. So they're all still there, and Larry is not.
The true reason for his departure is just a subject for gossip, but even today I agree that the Amazon store UI is confusingly dense and complicated, surprisingly bad UX for a Big Tech consumer-facing company.
[1] https://gist.github.com/chitchcock/1281611
My main gripe with Amazon is how slow it is. It takes me like 30 seconds to change my shipping address and payment method before placing an order. I always assume it’s because they’re using Glacier on AWS to drive that functionality.
That sounds more like your hardware/internet than it does Amazon's fault - I have never had any delay when changing address or payment method while placing an order, and I place a lot of orders on Amazon.
Fiber 2000Mbps up and down ~5ms latency
It’s slow regardless of my internet connection
I timed it on my computer, in between my clicks there was about 4 seconds delay total when changing payment method and changing shipping address. Your "30 seconds" delay is definitely something specific to your computing platform. Maybe try upgrading your Pentium 1 computer?
Do I break your balls too and say “I’ve never had any delay” and “there was about 4 seconds delay” are mutually-exclusive?
Do I break your balls further by pointing out there was never a “Pentium 1” CPU?
>Do I break your balls further by pointing out there was never a “Pentium 1” CPU?
You could try to be pedantic to distract from your obvious problem that your computer hardware is slow. "Pentium 1" means the original "Pentium" CPU, it is meant to disambiguate because "Pentium" could refer to Pentium 2, or Pentium 3, but I wanted to be clear that your computer is as slow as an original Pentium computer. But sure, go ahead and be pedantic instead of admitting your computer is slow.
>Do I break your balls too and say “I’ve never had any delay” and “there was about 4 seconds delay” are mutually-exclusive?
I said 4 seconds *TOTAL* delay. So that's 2 seconds delay across two actions, which I don't consider to be any real delay at all in the context of clicking on websites, and is certainly nowhere near the 30 seconds you claimed. If you really are seeing 30 seconds of delay (I doubt you've actually measured it), then your computer hardware is definitely to blame, and not Amazon. This is a you problem, not an Amazon problem.
I literally chortled reading that. Thank you for making my day
> surprisingly bad UX for a Big Tech consumer-facing company.
surprisingly bad ? Ever saw a Microsoft product ? A Google product ? I already set my preferencies. I don't want to set them a hundred times. No, i don't want a 1 px border, no title bar and no scrollbar.
Keep in mind that was written like 12 years ago, when things looked differently than they do now. (Notably that's pre material design and pre kennedy (the precursor to material))
article is from Aug 2005, so more like 19 years... I became suspicious when I was told the hero of the story left Amazon to join Yahoo!, which, y'know, was what people were wont to do back in the day, not so much anymore I'd guess
Or an apple product? :P
Good luck trying to find a thunderbolt cable unless you already know they categorize it as a "Mac Accessory - Charging Essentials". Lots of big carousels showing a sum total of 3 items at a time, and you better know the what a thunderbolt cable looks like, and don't mistake it for "merely" a usb3 charging cable. Or the 0.5m vs 1m.
Or using a mac - window management issues aside (it seems to encourage wasting screen space and peer at a tiny window in the middle of a massive screen....) - the "settings" app is a joke. A huge list of sections on the left, grouped seemingly arbitrarily, with a "Search" that only really works is you already know the exact wording of the option you're looking for. But hey, it's got fancy icons, so I guess that's nice.
This is all a bit tongue in cheek - using a mac to write this. All UX is "bad" in different ways IMHO. "Objectively best" UX is a pipe dream.
>Or using a mac - ...(it seems to encourage wasting screen space and peer at a tiny window in the middle of a massive screen....)
Isn't this normal on Windows too? The few times I have to use it, it seems I'm always doing something where some sub-window will pop up that I need to use or read, but the stupid thing is comically small, but needs to be scrolled (because there's too much content inside it for its size), but I'm forbidden from expanding the window size the way you can with normal windows.
This never happens to me in KDE: if a window is too small by default (which sometimes happens), I can always expand it.
I don't think I've ever clicked into a category on Amazon. I wouldn't need to know it's under Mac accessories, id just use the search bar and type in thunderbolt. In what world would clicking through a hierarchical UI be faster or easier than a search?
But, sorry for stating the obvious, the point of the Amazon web site is not to be convenient, or beautiful, or understandable. Its purpose is to sell you stuff, more stuff, more often. I bet some non-zero amount of confusion is increasing sales. It's much like supermarkets that put simple repeat-purchase items like milk at the other end of the store, to make customers walk and notice other items.
I would argue that Amazon's UI could garner more sales and reconstruct more of its slowly fading popularity if it simply stopped being such a dumpster fire of bad design, confusion, chaos, searches that barely work and of course, rampant links to attempts at sales fraud.
The confusion, coupled with Amazon's reputation inertia, does probably drive some bulk sales metrics, but it's a poison pill of scrounging what you can from existing users while potential new ones trickle away to sites that are simply better, easier to navigate and just less likely to sell you falsely packaged shit.
I'm dubious. Investing in development and design is one of the more expensive ways for a company to increase returns. Plus, Amazon are in such a dominant e-commerce market position [0] that they don't need to worry about UX beyond the ability to click "buy". Despite the poor design, their share is forecast to continue growing past 40% this year [1].
[0] https://www.statista.com/statistics/274255/market-share-of-t... (I'm not sure why ebay isn't in this chart?)
[1] https://www.emarketer.com/content/amazon-will-surpass-40-of-...
Yes but in arguing that you miss the point entirely. If Amazon's only motives in how it designes its customer interface are "hah, who gives a shit, we're so big that they'll keep buying even if we mistreat them and expose them to rampant fraud", that speaks of a shitty company which will eventually be eaten by the market. Consumer tolerance is finite, and smarter people than me have noticed the same thing and are working to chip away at it with shoppers.
I'd also hope for something a little bit more honest from Amazon, given all their multi-decade spiels about customer service, but here we are.
> that speaks of a shitty company which will eventually be eaten by the market
Agreed! Although "eventually" takes a very long time, possibly decades. In the meantime there are a host of cheaper tactics for maintaining and draining market share: acquisitions, exploitative terms (like Amazon's "best buy" conditions), regulatory capture etc... .
The issue is that the current system of shareholder capitalism strongly encourages short-term gains in lieu of sustainability. It's very difficult to justify spending on ease of use when you already have the users and there are things you could do to milk them like making ads more subtle.
What are these sites that are simply better?
Also, Amazon's retail empire is very much buttressed by its speedy delivery service, and thus warehouses and the truck fleet. A much better web site sometimes cannot compete with next-day delivery. (And sometimes can, of course!)
Etsy is pretty slick, although lately I've heard it has similar (but not the same) product/seller trust issues as Amazon.
iHerb has a somewhat Amazon-like UI but noticeably less cluttered and more structured. It highlights a minimum "best by" date, which is also a guarantee that goes beyond pure UI design but provides the kind of customer-friendly interface that informs rather than confuses.
I usually find Webstaurant product listings pretty informative and much cleaner.
From Big Tech, Google & Apple may not have a comparable physical goods marketplace but the marketplaces they do have - books and applications - are much cleaner and more focused than their Amazon counterparts.
Of course, the gold standard is McMaster-Carr.
I know Amazon is more successful than those examples. Microsoft Windows is also much more successful than its competitors, but I don't think most people would see that and argue "the point of an OS/software is not to be easy to use or convenient or understandable". Certainly it's not the point of Windows but I'd say that's because competition involves factors unrelated to quality and quality is multi-dimensional, not because quality is "not the point".
Well, for anything related to photographic equipment, try B&H, much better. Decent search too, and without all the ridiculous crud of false buy leads that Amazon dumps on you despite being one of the world's biggest companies and perfectly capable of providing a clean product search that's reliable.
Or, for random products, even damn walmart does better, and that's just embarrassing. I could go on with this but if you're defending the obviously broken interface that Amazon provides for buying products, I suspect your motives.
The one single thing I continue to use them for is book reviews, not buys, but just reviews, and that's a low bar indeed.
I'm an absolute nobody compared to Tesler, but I remember a hack week we did at Amazon, probably in 2009 or so. The SVP (reporting directly to Jeff) was giving a little intro speech about what they hoped to see out of the hack week, and he ended with an admonition: "Whatever you do, DO. NOT. ATTEMPT. TO. REDESIGN. THE. SITE. UI." We were later told by him that Jeff would literally rip your head off if he caught wind of it.
The one-button mouse was one of the worst inventions. Mice need at least 3 buttons to be useful. You sometimes need to select something and sometimes you need to launch it. Or you need to select in several different ways. Because of this mis-invention they had to invent the double-click and all the complexity of its timing (try making double-click timeouts work for someone old and nearing senility and for someone young and fast with the same settings - this is often impossible).
I don't know what the ideal number of mouse buttons is, but I agree that it is more than one. The odd thing is its appearance on the Macintosh: a computer where the user was supposed to orient most of their actions around the mouse.
That said, I'm pretty sure the double click came about because certain operations were costly. There is no particular reason why we couldn't use a single click to launch an application or load a document, except that an accidental click would force the user to wait for the process to complete (and it could take an unreasonably long time on older systems). Assigning that function to a secondary button would potentially make the problem worse, since the user would have to keep the functions straight while learning the system, and accidentally clicking the wrong button would be more frequent than an accidental double-click.
I used a (OS 7/8/9) Mac for years with a one button mouse. I was very productive with it! I remember seeing Windows 95 with its right-click context menus but I wasn’t very interested at the time. I was happy using keyboard shortcuts and the like.
I believe Steve Jobs was pretty adamant about keeping the one button mouse on the Mac for years and it was absolutely the right call. The Mac was way easier to use back then!
I am not doubting you were productive with it. However that does not make it better, it just makes it what you know. User experience experts have studied this for a long time, and they all conclude that you need more than one button. (there isn't agreement on the best number as there are some trade offs, but one is clearly not enough)
The world has a real problem with people arguing what they know is best without any basis in reality. Even if there is a clear reason one choice is better, most arguments for the better choice end up being because they know it better not the clear reasons it is better (see most metric/imperial arguments)
I have some older relatives who were actually able to use a computer back then, thanks to the simplicity and brilliant design of Classic Mac OS. Now they are essentially shut out of computing for the rest of their lives. They can just barely use their cell phones to make calls and send the odd text. They do all their banking over the phone.
There are tons of people like this. Apple used to be the absolute market leader at making computing accessible for everyone. At some point they got big enough and powerful enough that they could ignore all that and just let their dev teams do whatever they want. They gave up on trying to make real computing accessible to the masses and just pushed all these users to the iPad.
While Apple's ability to create "simple and brilliant" designs in the past can be attributed to the motivation and talent of their staff, I don't think you can say their current failure to do so implies the opposite. We live in a very different world. Computers are expected to do more and, regardless of how much Apple despises it, computers are expected to interact with other systems.
Just think of Hypercard. Many people here will talk about how great it was, and it was great. Yet the most talented developers and designers in the world couldn't recreate it in a form that is both simple and reflects the needs of the modern world. It would always end up lacking essential features or be burdened by an overabundance of functionality.
What about having 2 buttons vs 1 button makes something "inaccessible"? If anything having 2 buttons makes the options that are in the context of the current application more accessible.
But people are incurious, stupid, and just dull-witted, I guess? So you really want the rest of us to have to suffer with 1 button because the lower end of the bell-curve can't handle 2 buttons for reasons related to stupidity?
People who don't come to use a computer for the first time until middle age or later tend to struggle mightily. I'm not sure why, but they seem to have a strong aversion to experimenting with the system. I face this issue almost every day with my 74-year-old father and his iPhone. He can only do the tasks which I explicitly teach him and anything new (like changing a setting he's never changed before) requires me to show him the steps. The fact that changing a different setting in the Settings app is an almost-identical process never occurs to him. He just asks for help every time.
Back in the Classic Mac OS days he did just fine with a 1-button mouse. He was able to click on and interact with everything and got what he needed out of the computer. A 2-button mouse is just utterly baffling to him. It turns every single thing on the screen into a fork in the road: do I left-click this or right-click it?
My 72 year old mother installs her own video cards and has a mouse with 5 buttons. YMMV, I guess.
The 1 button mouse was not superior in any way, and the people that get confused by a mouse with more than 1 button probably should never look at a web page with more than one thing to click. Did your father ever drive a car? Is he aware there's more than one pedal and lever to use? Did he ever find his way to the windshield wipers or did you just drive around with the rain obstructing the view? I'm genuinely curious how such a person could function in this world in the last 80 years.
The pedals in a car always do the same thing every time you press them, in every car you get into. Mouse buttons are not like this. Different applications use them differently. There may be a sizeable chunk that are reasonably consistent but there are tons of outliers that do all kinds of bizarre stuff like using right click to select or to cancel an operation or to bring up a tool tip. Sometimes it’s left click to select and right click to move.
Plus you never know what’s going to be in a context menu until you right click to open it. Sometimes you move the mouse slightly and right click something else and get a different context menu. For an older person with declining vision this can be very confusing. Fixed menus at the top of the screen are discoverable. You can even search for what you want in the help menu. Context menus are not discoverable.
I’m glad your mother knows how to install video cards. My aunt worked at a TV factory installing boards into the case and soldering all the through-hole components to the board. She’s still pretty baffled by computers but she’s almost 80 years old and rarely needs them.
You have to be willfully ignorant or brain damaged to not understand how a 2-button mouse works after the first time using it. The context menu does different things depending on the context. Someone that can't understand that is seriously in trouble in life. It's not something that should be confusing at all.
And as I said, hopefully your father never visits any websites, because they are all different with information in different places on every website. The world must be an extremely frustrating and hostile place for someone that gets confused by a 2 button mouse. I honestly feel bad for your father if everything you say is true.
I don't think it was about him saying "one button is optimal for all time, always". I think a lot of it was "this is a whole new paradigm for people who only know the typewriter; let's make it simple for them, and down the road, when people have adjusted to the simple case, we can make it more elaborate".
Hell at the time when this happened most computer joysticks had only one button, or in the case of one of the popular joysticks at the time, three buttons that were all just wired to one input.
"... keyboard shortcuts ..." is a common response to defend Mac design choices.
The single shared menu bar is also something that made sense on the original 9" 512x342 Mac screen to save space, but it really is nonsensical in the days of 32" 6K displays - so much mousing to get up to that menu, but of course "... keyboard shortcuts ..." comes the refrain.
The single shared menu bar has one huge advantage over per-window menu bars: infinite mouse target size along the vertical axis. When moving the mouse to a narrow strip menu bar at the top of a window you need to accelerate the mouse towards the target and then decelerate in time to stop on the target without overshoot. With a menu bar at the top of the screen you can skip the decelerate part and just slam the mouse to the top of the screen without worrying about overshoot.
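(This is Fitts's law at work - the standard HCI movement-time formula, not anything from the article:

    T = a + b * log2(1 + D/W)

where D is the distance to the target and W is the target's width along the direction of travel. Docking the menu bar at the screen edge makes the effective W along that axis essentially unlimited, so the log term shrinks and pointing time collapses toward the constant a.)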
You’re right about giant displays though. The best menu system for those is pie menus [1]. Although I would still dispute the advantages of the second mouse button for activating those. The F1-F12 function keys would be much better since you could have instant access to 12 different pie menus instead of a single one with right click.
[1] https://en.wikipedia.org/wiki/Pie_menu
Apple wanted to advertise you couldn't press the wrong button, which meant they couldn't have more than one button.
In that case the right thing to do would be a mouse with no buttons, wouldn't it? No buttons, no pain, I say.
You could still press it at the wrong time. ;)
At the wrong time, for the wrong amount of time, in the wrong place... Lots of ways to go wrong, but they can all be hidden in advertising.
Double-clicking is also undiscoverable. People had to be taught about it. Nothing natural or previously familiar works that way.
To be fair, people also had to be taught how to point with a mouse.
The double-click also wasn't essential, you could perform all actions using the one-click menus. The double-click was introduced as a shortcut. From the Apple Lisa Owner's Guide:
8<––––––––––––––––––––––––––––––––––––––––––––––––
Shortcuts
The File/Print menu contains all of the commands you need for creating, opening, closing, and storing your documents. Because you use these commands so frequently, the Office System includes a simple shortcut for performing these tasks: clicking the mouse button twice.
To tear off a sheet of stationery, click twice rapidly on the stationery pad icon.
To open an icon into a window, click twice rapidly on the icon.
To close an open window, click twice rapidly on the window's title bar icon.
Clicking twice to close a window can either set aside the object or save and put away the object, depending on where the object's shadows are. If there is a shadow on the desktop, clicking twice causes the object to be set aside. If the only shadow is in a folder or on a disk, clicking twice summons a dialog box, which asks you whether you want the object set aside or put away.
8<––––––––––––––––––––––––––––––––––––––––––––––––
Pointing with the mouse is discoverable, though. If you start moving it - which, odds are, you will do at some point (even by accident) - you will see the pointer moving and eventually figure it out.
You are very unlikely to discover the double click by accident.
I think it’s a spectrum. Unless the double-click delay is ultra short, I’m pretty sure you would discover it sooner or later. My point was that you couldn’t expect someone to learn how to use a computer without any training or instructions, so if you need that anyway, you can also include less discoverable features in it.
There is a trade-off between feature sets that are useful if you know them and super discoverable feature sets, in the sense that the reason some feature is more efficient can also make it less discoverable.
The original Macintosh came with interactive tutorial software demonstrating the use of the mouse and giving you practice manipulating it. The double-click was part of this instruction.
"I'd ran of the mouse pad" and pointing at the screen are not the anecdotes.
Less familiar users are often also confused about whether something requires a double-click or not - double-clicking web links, for example. What's worse is that it's often not immediately noticeable, leading to opening the same program multiple times or breaking some submission form.
The Alto used a three-button mouse. It was Charles Irby and Dave Smith who decided it would be two for the Star. It was a bitter debate, but they won.
When it was three, every Alto program had its own set of conventions for them. There was no way that could have been unified for a multi-purpose computer.
All lab GUIs starting from Engelbart's AUGMENT had 3-button mice, including the Xerox Alto. The researchers and engineers using the Alto found that the mouse buttons were confusing - even this small group couldn't remember the combinations in each application - and it led to many inconsistencies between apps. Each button had a very precise function, and they were as much modal shortcuts as direct-manipulation buttons.
Even Xerox tried to reduce it to one button for the Star (their first commercial GUI computer), but from their own published account they couldn’t find a way and the Star shipped with a 2-button mouse.
Remember, at the time most people targeted by Apple (or other computer manufacturers) had never used a computer, and the people who did use a computer never used a mouse! (Except for the PARC researchers and some other researchers, so maybe 2000 people worldwide)
It’s people coming from Xerox to Apple who were the most interested in having a one button mouse! They knew the 3-button mouse would confuse users and was entirely unnecessary for most users (as it is today, the secondary click being reduced to just a convenient shortcut, nothing more)
A previous comment about the 2-button Star and how these buttons were actually used: https://news.ycombinator.com/item?id=31750283
> One button mouse was one of the worst inventions. Mice need at least 3 buttons to be useful.
I think the way the article lays out the invention shows that it was a good idea at the time. Apple sticking to a single button mouse was a bad idea. But it wasn't like Tesler saw three buttons and said "If I remove two, I'm a genius".
I don't know what you do with your third button, but I have never used it. It's not a core interaction in any interface I've seen. On my Windows machine it does all sorts of unexpected things, from starting a scroll mode, to dragging items (very rarely), to closing tabs, to nothing on most things. It's not useful. Now the back and forward buttons are super handy, but I wouldn't say they are essential to a mouse.
It's the epitome of design over function. Destroying UX entirely so the mouse can look a tiny bit more sleek. Nothing more Apple than that I guess.
As other commenters have noted, it was from actual user testing, and from the experience of the Xerox alumni. Definitely the right call in 1983.
Today of course Apple uses trackpads with "zero" buttons (though the trackpad is clickable in various ways) and a lot of non-obvious (though generally well-designed) gestures like two-finger scrolling, pinch-to-zoom/rotate, etc.
I wonder why they went with WYSIWIG in the title despite having WYSIWYG in the article.
Anyway, it's hard to blame the folks who invented it, since it was early days, but WYSIWYG was a truly terrible idea. It heavily implies the need (although it doesn't technically demand it) for user input to produce only local changes, so we've been cursed with all these office documents with terrible spacing. It also ruins our ability to actually communicate with the computer, or to describe things at an abstract level. People just poke their documents around until they get something reasonably sensible-looking in their current editor.
Is the text reflowed around the figure, or did the user just manually add a bunch of line breaks and then manually paste in the figure (anchored to what?)? We'll find out later if somebody changes the font.
Maybe WYSIWIG almost works, actually. What you see is… whatever I got. Except it only works if we have the same version of the same office suite.
WYSIWIG was revolutionary in its time. Even the Alto didn't really use it, although you could turn it on if you were masochistic.
> describe things on an abstract level
That's exactly what ordinary users do NOT want.
WYSIWYG democratised computerised printing and other areas, arguably providing the backbone of the PC revolution.
You're mistaking Microsoft's specific implementation fumbles with an interaction mode that helps billions of people every day. Don't throw the baby out with the bathwater, there are plenty of good WYSIWYG implementations out there.
The only thing more frustrating than sizing and aligning a figure in MS Word is sizing and aligning a figure in LaTeX.
I agree, WYSIWYG is not a good idea, and a one button mouse is not a good idea.
You can have print preview if you want to preview the page layout. This is also faster and more efficient than reflowing the text as it is being typed, anyways.
(I don't know why the spelling is different)
Fascinating... Are you aware of the context in which WYSIWYG came about?
So presumably you don't want to return to that state, where you literally would have no idea what it would look like until several minutes later when it finally came out on the printer?
Can you explain a little bit further what your ideal paradigm is?
> where you literally would have no idea what it would look like until several minutes later when it finally came out on the printer?
That is why print preview is a good idea (which is possible with most modern computers; this can be done independently of WYSIWYG editing). For example, if you write a TeX file and then make the DVI file and use xdvi or another previewer to display it on the computer before you print it on paper.
Reveal Codes would be another possibility, perhaps in combination with a "partial WYSIWYG" editor which does not display reflowing etc. in the editor, only in the preview; if you use Reveal Codes then formatting codes are displayed (e.g. bold, italics, etc.), but you can also display the bold, italics, etc. directly during editing. This can be an in-between approach, which gives you some of the benefits of WYSIWYG and some of the benefits of non-WYSIWYG.
while writing documentation in Markdown, I almost always have the preview open in split-screen in vscode, not because I need it, but mostly to make sure that the other people reading my work have a good experience.
Ideally people would write mark-up in text editors. If you want to be very nice, I guess it would be OK to have drag-and-drop WYSIWYG environments that spit out the markup code, but there should be a very high priority on making sure the code produced is human-readable.
Mark-up is something that ordinary users will never understand. "Good enough" is what they want. Printing it to check that it came out right is just fine with them. It is simply not worth it for the software to try to make everything perfect.
Case in point: my book was edited in Vellum, and it can generate PDF and EPUB. The PDF had a widow (one line at the top of the last page of the chapter), and Vellum just omitted that one-line page. To the reader it seemed like a typo (which it was, in a sense).
I "fixed" it by removing a few words up above it, so everything fit on the last full page. But it was only that I knew about widow/orphan control that I could figure that out. Just imagine how much trouble it would be to make things perfectly WYSIWYG on every printer and every type of document.
It amazes me that something as simple and obvious as cut and paste had to be invented. Even more amazing that we can actually point to the person that did it.
Indeed! I'm writing an app that resizes and moves shapes on a canvas (among other things) and I'm amazed at how many trivial little things I had to write that everyone including me would take for granted, including copying and pasting, drawing the little handles to resize the shape, changing the cursor based on what's below it (the handles or the shapes), drawing a translucent version of the shape when it's being moved/resized, changing the position of the shape when it is resized from _some_ of the handles but not all (top left vs. bottom right)...
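That last one is a good example - here's a made-up sketch (not my actual code) of why some handles have to move the origin and some don't:

    // Illustrative only: a rectangle resized by dragging one of its handles.
    // Dragging the bottom-right handle only changes the size; dragging the
    // top-left handle also has to move the origin so the opposite corner
    // stays fixed.
    #[derive(Clone, Copy, Debug)]
    struct Rect { x: f32, y: f32, w: f32, h: f32 }

    enum Handle { TopLeft, BottomRight }

    fn resize(r: Rect, handle: Handle, dx: f32, dy: f32) -> Rect {
        match handle {
            Handle::BottomRight => Rect { w: r.w + dx, h: r.h + dy, ..r },
            Handle::TopLeft => Rect {
                x: r.x + dx,
                y: r.y + dy,
                w: r.w - dx,
                h: r.h - dy,
            },
        }
    }

    fn main() {
        let r = Rect { x: 10.0, y: 10.0, w: 100.0, h: 50.0 };
        let dragged = resize(r, Handle::TopLeft, 5.0, 5.0);
        println!("{:?}", dragged); // origin moved, size shrunk
    }
    // (A real version also has to clamp negative widths/heights and handle
    // the other six handles, which is exactly the kind of tedium I mean.)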
just wait until you get to the undo/redo stack! That's one of my favourite parts of a UI to program.
My experience is that undo/redo isn't something you tack on at the end of the project, it has to be baked into the infrastructure. Every action you take has to be delegated to an object that has all the information it needs to both do the action and undo it. Then you can just keep a stack of those objects and call them as necessary.
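A minimal sketch of that idea, in Rust with made-up names (nothing framework-specific, just the shape of it):

    // Every user action becomes an object that knows how to apply and
    // revert itself; undo/redo is then just two stacks of those objects.
    trait Command {
        fn apply(&self, doc: &mut String);
        fn revert(&self, doc: &mut String);
    }

    struct InsertText { pos: usize, text: String }

    impl Command for InsertText {
        fn apply(&self, doc: &mut String) {
            doc.insert_str(self.pos, &self.text);
        }
        fn revert(&self, doc: &mut String) {
            doc.replace_range(self.pos..self.pos + self.text.len(), "");
        }
    }

    struct History {
        undo_stack: Vec<Box<dyn Command>>,
        redo_stack: Vec<Box<dyn Command>>,
    }

    impl History {
        fn execute(&mut self, cmd: Box<dyn Command>, doc: &mut String) {
            cmd.apply(doc);
            self.undo_stack.push(cmd);
            self.redo_stack.clear(); // a fresh action invalidates redo
        }
        fn undo(&mut self, doc: &mut String) {
            if let Some(cmd) = self.undo_stack.pop() {
                cmd.revert(doc);
                self.redo_stack.push(cmd);
            }
        }
        fn redo(&mut self, doc: &mut String) {
            if let Some(cmd) = self.redo_stack.pop() {
                cmd.apply(doc);
                self.undo_stack.push(cmd);
            }
        }
    }

    fn main() {
        let mut doc = String::from("Hello world");
        let mut hist = History { undo_stack: Vec::new(), redo_stack: Vec::new() };
        hist.execute(Box::new(InsertText { pos: 5, text: ", dear".into() }), &mut doc);
        assert_eq!(doc, "Hello, dear world");
        hist.undo(&mut doc);
        assert_eq!(doc, "Hello world");
    }

The fiddly part in practice tends to be granularity - deciding how many keystrokes coalesce into one undoable command - rather than the stacks themselves.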
Fortunately I'm using iced which is an implementation of The Elm Architecture so undo/redo is just a matter of keeping track of which messages were sent and reversing them as needed.
The alternative, as used e.g. by the Xerox Star, was select-and-copy/move. The advantage over cut-and-paste is that you don't have invisible fragile state.
However, the Star implementation had copy and move modes (select source, COPY, mouse to destination, CLICK) and Tesler hated modes. I don't know why Star didn't use the modeless version (select source, mouse to destination, COPY).
The Mother of All Demos had a chorded keyboard, a mouse, a regular keyboard, and video conferencing.
https://en.wikipedia.org/wiki/The_Mother_of_All_Demos
I love that piano-like chorded keyboard (https://en.wikipedia.org/wiki/Chorded_keyboard#/media/File:X...) from the Alto. I think it's still an interesting UI concept, but I think it should/could be adapted to a foot-pedal design; chords would be constrained to 2 inputs though, unless maybe two of the inputs were directly side by side - then you could expand to three. Organists know what's up ;)
There was one guy at Xerox (Smokey Wallace, RIP) who loved the keyset and could type like a mad man on it.
Everyone else just put it away and ignored it, until MazeWar came along.
I thought about it a lot and realized the design was swallowed by control/shift key combinations.
You're talking about the keyset.
Most Alto users didn't even use theirs until MazeWar came along. Then tech support was flooded with bad keysets that people wanted to start using.
To be able to do copy (edit) paste (edit) paste you need independent storage of what's copied. Move requires new UI state, whereas cut is just copy followed by a delete. With undo, cut is safe enough, and there's likely something more critical to work on than a move UI.
Also, I’ve seen some terrible move UI. It may seem cool to have a big floating blob of text follow the cursor, but that doesn’t work well when you want to move multiple pages or across multiple pages.
If you want to replace by pasting, you'd lose the first selection by selecting what you want to have replaced. Which means you'd need different selection modes depending on whether you are selecting the source or the target of the copy/move. Furthermore, maintaining the original (source) selection while preparing the insertion point (target) of the copy/move is also fraught with some fragility.
> If you want to replace by pasting, you'd lose the first selection by selecting what you want to have replaced.
Yes, that's the tradeoff; you'd have to delete that separately. This would be quite close to the X11 primary selection and middle-click paste. I think that works reasonably well on its own, but trying to provide both models as X11 does is a mess.
Does anyone actually use cut-and-paste? Copy-paste-delete is the less scary option, right?
There’s a reason copy usually gets the C shortcuts I think.
I use cut and paste pretty often. Besides giving visual feedback that the operation actually worked, it also makes it easy to move things around, including between files.
Back before 2009, early iPhones didn't have cut/copy and paste. Folks had to figure out a good scheme that worked with touch screens.
Which I found flabbergasting at the time, because it had been a standard feature on PDAs ten years prior. I only bought an iPhone once it gained cut&paste support.
Cut and paste is a sort of obvious miss, but in general, I think Smartphones benefitted from not taking for granted the features of PDAs. There was always something deeply niche about the things.
I think in general we are losing a lot of functionality especially since the phone UIs are slowly creeping into the desktop. Discoverability and consistency are simply horrible compared to how things worked around 2000. I think it's a huge regression.
I can't wait until somebody dusts off the design principles of Windows 95/2000 or Mac System 7 and will sell this as the new UX paradigm.
> I think in general we are losing a lot of functionality especially since the phone UIs are slowly creeping into the desktop. Discoverability and consistency are simply horrible compared to how things worked around 2000. I think it's a huge regression.
Indeed. Remember when every icon had a tooltip that told you what it would do? Remember when it shipped with a book that also told you what each thing did?
I recently used an app that was a unified phone/PC interface, and I was pretty sure that somewhere in a list of icons was the thing I wanted, but I wasn't sure which. I picked the wrong one and then had to figure out how to undo what I had just done.
Who needs to waste time with manuals when you can just Google what you want to do and watch a teenager deliver a three-minute monologue with 15 seconds of actual (but incorrect) content?
Or the content was correct at one point, but 3 weeks later the button was moved in some random update.
Yeah. Tooltips are a big loss. They would be so easy to implement but nobody seems to care anymore.
Absolutely, tooltips were the best. But how do you do tooltips on a touch interface?
Oh yeah, phone UI infecting desktop design is a real shame as well. These are just different types of devices.
There are more misses. For example I found it surprising that they didn’t include a universal “context menu” equivalent (long press would have been obvious) and a universal menu bar equivalent (like Palm OS did). Stuff like this is why we still have an awfully complex and inconsistent UI landscape on mobile.
I think I lightly disagree. Phones are just not good for complex use-cases. I don’t want a context menu on my phone; the depth of interactions in a browser, for example, should be… slide the webpage this way, slide it that way, poke a link (or, I guess, to leave room for what I’m doing now, poke a text box to write in it). Dumbing down the UI was a good idea.
We do have all sorts of inconsistent “context menus” now on mobile. Sometimes after you select something, sometimes as items under the share button, sometimes when you actually long-press, sometimes a menu appears when you tap an item, sometimes as action items that appear when you slide an item to a side. And even for a single one of those variations, different variants with different looks exist, etc. A uniform way to “show me all actions I can perform on this item” would be greatly beneficial.
Interesting.
Actually, this conversation has made me realize I only do this sort of “give me more options” interaction in Safari (long press) and Panic Prompt (double tap). I think I hadn’t noticed the inconsistency because 2 is not very many, and also the Panic Prompt behavior is a sort of nice analogy to the typical Linux terminal behavior.
Still though, only two programs and the inconsistency is immediate, haha.
So what is the issue again? Or all resolved?
>and a universal menu bar equivalent
webOS, the poster child for simple and consistent UI, did all of this.
Much like the Amiga, this OS is always imitated, never copied, even though Android should have thrown out everything after Honeycomb to adopt what it brought to the table.
Even feature phones had copy/paste!
> Folks had to figure out a good scheme that worked with touch screens
In Jony Ive’s brilliance, the magnified view was done away with for several iOS versions.
This man was the Karl Pilkington of technology.
Yeah, I think this was a UI issue more than a 'we don't think it's a needed feature' issue. Long press with that sticky popup was just something that hadn't been arrived at yet... and certainly Force Touch tech didn't exist yet.
I’m loosely reminded of that Roger Sterling quote from Mad Men: ”I'll tell you what brilliance in advertising is: 99 cents. Somebody thought of that."
There will be a lot of obvious stuff invented in the future that we aren't thinking about now.
My dad grew up on a farm and later regretted not inventing the large round baler that most farmers use - he already knew about small round bales, so the only thing missing was to make them larger and then haul them with a tractor instead of lifting them by hand as you did the small bales. Despite saying the above for years, it never occurred to him to invent the large square baler, which uses the same concept (haul with a tractor) but stacks better. Everything was known and so obvious in hindsight.
> Everything was known and so obvious in hindsight.
that's why "non-obvious" is so contentious in patent examinations. How do you KNOW it was obvious at the time?
Often the obvious stuff was invented decades ago, but some old people in power persistently refuse to implement into major products. Like the ability to copy multiple things without overwriting the previous entry. Who would ever need that?!
Winkey+V on windows, for anyone interested. Can be turned off in settings app.
Not before time - this is in Windows 11.
I assume the last product manager who vetoed this change for decades finally retired.
Started with Windows 10.
>> something as simple and obvious as cut and paste had to be invented.
Which it was - a few hundred years ago. Cut-and-paste began as a manual process. Arranging material for print often involved very literal cutting and pasting of text and images. Entire trades (typesetters) were dedicated to the task. A more accurate description of Tesler's contribution is that he was the first to implement the concept in the digital realm. The person who "invented" the delete key did not invent the concept of deleting a character.
yes i think it was pretty obvious to anyone reading this article that Larry Tesler did not invent the physical action of cutting and pasting paper
I'm not so sure. The art of typesetting something like a newspaper page doesn't exist for most people. They see old-school wooden printing presses ... big gap ... then bubblejet printers. I know people who think newspapers were somehow silkscreened. The idea that someone in the mid-20th century would glue bits of text to a page, which was then transformed into a metal printing plate, is a process most do not appreciate.
Yeah, in the early aughts I had to explain phototypesetting to some students; they just thought everyone used metal type or its evolution, the typewriter, until the advent of the computer.
It's a good reminder that everything had to be invented at some point in time.
Even trivial stuff like boiling water for tea ...
I'm not sure if invented is the right term in all cases. Discovered may be more appropriate in some.
Sliced bread was invented in the 20th century
Important to note: mechanically, uniformly and massively available sliced bread.
Of course bread has been served sliced for centuries before
Almost everything you see or use around you was invented at some point.
e.g. the following things were all invented:
- that a human dwelling has space between adjacent dwellings and/or eventually streets (straight streets came even later)
- punctuation and spaces between words (looking at you Ancient Greek)
- what word to use when answering the phone ("ahoy hoy" was one proposed option)
It really is true what Steve Jobs said (apropos given Larry Tesler worked at Apple):
"Everything around you that you call life was made up by people that were no smarter than you and you can change it.|
Demonstrably untrue, since humanity was made up of people who lived happily without those things and never even thought of them. Probably even laughed at them when they first appeared.
My favorite Larry Tesler contribution is Tesler’s Law or the Law of Conservation of Complexity[1].
It answers the question “why does this have to be so complicated?”, which I have found to be useful in countless numbers of UI discussions.
“We need it to do this, that, and this other thing, but in an uncomplicated way.”
Well, it can’t be less complicated than any one of those things then.
https://medium.com/kubo/teslers-law-designing-for-inevitable...
What's the relationship with Ashby's Law of Requisite Variety?
<https://en.wikipedia.org/wiki/Variety_(cybernetics)#Law_of_r...>
There's also Larry Wall's "Waterbed Theory", effectively that complexity cannot be squashed down: it will out. Though I suspect this post-dates Tesler.
<https://nick.groenen.me/people/larry-wall/>
Larry Tesler was a great figure in computing history, but why is this being reposted now?
"Inventor of Cut/Paste" is such a ... limited ... way to describe his accomplishments.
I think it's a fantastic way to describe his accomplishments, it gives context to how early and groundbreaking his work was in a way that even the least tech-savvy can understand. Everyone knows what cut and paste are, no one thinks about the fact that someone had to come up with it.
Limited, and also wrong -- Tesler didn't invent any of those. They existed already by 1973 (supposed date of these inventions at Xerox PARC); e.g. TECO from MIT and E from Yale had functionality for cutting/pasting, replacing strings, etc.
The wording of the article suggests that he came up with the term "cut and paste", rather than the concept:
> In 1969 Tesler volunteered to help create a catalog for the Bay Area’s Mid-Peninsula Free University. He and Jim Warren, founder of the West Coast Computer Faire, did the paste-up for that catalog. Around the same time, Tesler saw a demo of a computer command that allowed you to bring back something that you had deleted. The command was called “Escape P Semicolon” (or something similarly arcane). Several years later, when Tesler was at Xerox PARC writing a white paper about the future of computing, he drew on the memory of those two experiences to predict that you would be able to “cut and paste” within computer documents.
Also, I don't think it particularly counts as an invention as it was heavily used in publishing well before the computer era, and was just shifting to a computing context and re-using the same metaphor. For a long time, text was printed in sections, physically cut up and pasted to a board, and when the entire page was assembled it was photographed to create a negative that was used to print the newspaper.
Just to be clear that I'm not intending to disrespect his work, just arguing the semantic meaning of "invention" with respect to this. His obsession with mode-less user interfaces and user-facing simplicity is far more significant a contribution to society in general (and ironically, cut-and-paste is almost the antithesis of his main philosophy as the once-cut data becomes hidden state - it'd be a better metaphor to highlight the data and physically move it around the document).
> the once-cut data becomes hidden state
It's not completely hidden - you can view it using "Show Clipboard"
The article does mention other things, though it makes sense that they would highlight what he's most-famous for.
Bret Victor - Larry's Principle: https://www.youtube.com/watch?v=PGDrIy1G1gU (~38:08)
'No modes' ...this was a super influential talk for many of us a decade ago.
https://news.ycombinator.com/item?id=3591298
https://news.ycombinator.com/item?id=16315328
https://news.ycombinator.com/item?id=12196513
> So why haven’t you heard of him?
Because you've had your head under a rock? It was headline news when he died (which was after this was published).
> “And the question I remember most was from Steve Jobs. He said, ’You guys are sitting on a gold mine here. Why aren’t you making this a product?’”
Xerox WAS making it into a product (the Star). Of course Larry couldn't tell him about that. It failed, just like the Lisa did.
> As one of Tesler’s first tasks at PARC, he and a co-worker wrote a paper on the future of interactive computing, which for the first time talked about cut-and-paste as a way of moving blocks of text, images, and the like. It also described representing documents and other office objects stored on the computer as tiny images—icons—instead of as a list of names [see photo, ].
The "co-worker" was David Canfield Smith, who was directly involved in the Star, unlike Larry.
https://www.youtube.com/watch?v=Bt_zpqlgN0M (he IS a little stiff in this)
https://www.youtube.com/watch?v=_OwG_rQ_Hqw
> He even convinced Apple to invest in a newly created company, Advanced RISC Machines Ltd., also in Cambridge, that would produce them.
And that stake was quite possibly crucial in helping Apple survive.
> Plus it's on record that Apple made a total of $1.1 billion out of selling those shares, which represented a profit of 366 times its original investment. That money helped Apple survive, and Jobs decision to cut the Newton — with its ARM processor — was also part of the surgery needed to keep Apple alive.[1]
[1] https://appleinsider.com/articles/23/09/05/apple-arm-have-be...
What I've always missed on OSX is the X.org behaviour where you copy just by selecting text and paste by pressing the middle mouse button.
Which makes it impossible to replace a selection by pasting. Also, except for terminals, it typically pastes at the pointer location, so you need precise aim (emacs thankfully lets you customize this, but UI toolkit widgets usually don’t)
> Which makes it impossible to replace a selection by pasting.
In principle this is false with a Plan9-like model of mouse chording. Holding left click over a selection and tapping middle click is a reasonable solution.
But for those cases you just use cmd+v or whatever
Yeah, and I keep pasting the wrong thing because the terminal emulator tries to simulate the X behavior (which is very useful) but doesn't maintain a separate buffer like X does.
Nice features but I still prefer yank and put.
(cough) kill and yank.
Related - with a top comment by alankay:
Larry Tesler Has Died - https://news.ycombinator.com/item?id=22361282 - Feb 2020 (149 comments)
The comments show a lot of confusion about what Tesler invented. Other industries did indeed use cut/copy/paste, and older editors had ways to do these functions. But the cursor in those editors normally indicated a character. Larry figured out that if instead the cursor indicated the space between characters, and sometimes a second such cursor marked off all the characters between them, then he could do with a single operation what previous editors needed various commands for: replace the selection with what has just been typed and move the cursor to right after it. "Paste" would then just be the equivalent of retyping a previous selection that had been either "cut" or "copied".
If the two cursors were at the same spot (just a blinking vertical bar) then you are inserting text as you type it. If there was some selected text then you are replacing it and then inserting anything more you type. And so on.
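To make that concrete, here is a toy sketch of the model (my own illustration, definitely not Tesler's code; indices are byte offsets for simplicity):

    struct Editor {
        text: String,
        sel_start: usize, // gap position before a character
        sel_end: usize,   // equal to sel_start => a plain blinking insertion point
        clipboard: String,
    }

    impl Editor {
        // Typing and pasting are the same operation: replace whatever is
        // selected (possibly nothing) and leave the cursor after the new text.
        fn insert(&mut self, s: &str) {
            self.text.replace_range(self.sel_start..self.sel_end, s);
            self.sel_start += s.len();
            self.sel_end = self.sel_start;
        }
        fn copy(&mut self) {
            self.clipboard = self.text[self.sel_start..self.sel_end].to_string();
        }
        fn cut(&mut self) {
            self.copy();
            self.insert(""); // cut = replace the selection with nothing
        }
        fn paste(&mut self) {
            let s = self.clipboard.clone();
            self.insert(&s); // paste = "retype" the saved selection
        }
    }

    fn main() {
        let mut e = Editor {
            text: "hello world".into(),
            sel_start: 0,
            sel_end: 5,
            clipboard: String::new(),
        };
        e.cut();                        // text is now " world", clipboard is "hello"
        e.sel_start = 6; e.sel_end = 6; // move the insertion point to the end
        e.paste();
        assert_eq!(e.text, " worldhello");
    }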
Both at Xerox Parc and at Apple he actually tested his ideas on potential users and often found he guessed wrong about what would work and what wouldn't. He would then try something else.
> Tesler registered a strange combination of sensitivity to people and fascination with math. The best career choice the counselor could suggest was working as an architect or maybe becoming a certified public accountant.
A CPA??
I'm glad to see that counselors have always been terrible?
I wrote a remembrance when he died a few years back. The timing of this story (2005) is a little unfortunate, because the genius of the Newton investment only showed itself later, even with its failure: https://www.vice.com/en/article/n7jdgw/larry-tesler-the-inve...
See, Apple invested in ARM because of the Newton, which means they held Newton stock. And on top of the fact that this gave Apple an inside line/competitive advantage with ARM that we’re still seeing today, it also meant that Apple owned ARM stock—and could sell it. When the company was near its nadir in the late 1990s, it nursed itself back to health by selling shares of ARM.
So even Tesler’s biggest failure was a stroke of genius.
I have an old messagepad (110) and ran across it a few months back when I was going through some of my storage boxes. Plopped some AA batteries in it and it booted right up.
Still a great user interface and the handwriting recognition still works great (though it is a little slow).
Very much ahead of its time; the early Palm era was such a massive backslide (aside from size and price).
The Mac effectively had a multi-button mouse. It's just that the mode shift buttons were on the keyboard.
RIP.
The one button mouse was significantly underrated. Using the keyboard keys as modifier keys to the mouse was ergonomically great, and anybody who complains about the mouse seems to never really understand how that system worked.
Why allow the user to do things with one hand when you can force them to use two hands? The two-handed UX is far worse imo.
A valid argument only in a world where single-hand chorded keyboards are the only interface for CLIs.
Ctrl C, Ctrl V 1946. Ctrl X 2020.
Ctrl Z
D’oh
fg
Different people's brains work differently, essentially innately. And skilled, trained brains work differently than the same brain did when it was green. It doesn't seem that Tesler's work ever reflected these important details about the world.
I expect if tasked, Larry Tesler would have "invented" the one-button game controller: fuuuuuunnnnnn (Joe Biden can't get enough of his!)
What you see is what I get
What they said is what I don't get
“no modes”, i’ve always considered that to be a bit of a mantra worth following. but now i seem to be breaking that rule while learning Vim (normal mode, insert mode, and so on).
yesterday i was test driving a car with eco mode, sport mode..the Larry in me was yelling “no modes”!!!
I also use vim (or neovim). But that doesn't mean that I believe in modes or that neovim/vim is a good editor.
I think there is some kind of psychological thing driving this. Like subconsciously, I came to the conclusion many years ago that "real programmers" use vim or Emacs, and then consciously decided that the default keybindings for Emacs were slightly worse.
So for decades I have been trying to learn just enough vim to get by. But practically every day I miss my PC keys for things like selecting text.
At least three times I have got my keybindings the way I wanted and then after a new install or something just decided to deal with the outdated way that vim does it.
You have to realize the context in which vim was invented. There was no WYSIWYG. People were used to things like 'ed', where everything was a command. Just being able to stay in a mode and move around freely on the screen was a big deal. The terminal hardware didn't even have a way to hold a key combination.
Vim modes allow you to keep your hands on the home row most of the time and make a mouse unnecessary for editing. That keeps my hands, wrists and forearms healthy and for that I am grateful. Of course a great programmer is not defined by their tools. What matters is what you create, not how you create it.
The nice thing about a mantra like no modes is that you're right 9/10 times. But I won't go back to non-modal text editing.
RIP to a legend!
WYSIWYG is an acronym for "What You See Is What You Get" that refers to software which allows content to be edited in a form that resembles its appearance when printed or displayed as a finished product, such as a printed document, web page, or slide presentation.
... For anyone else who didn't want to look it up.
Discussed at the time:
Larry Tesler Has Died (gizmodo.com) 1346 points on Feb 19, 2020 | 155 comments
https://news.ycombinator.com/item?id=22361282
Edit: Original URL updated from BBC obit to IEEE post so this is a bit of a non sequitur now.
(2020)
Copy and pasting the old news story has to be some ironically fitting way to honor the man.
(2020) and a shit article.
Better -
https://spectrum.ieee.org/of-modes-and-men
He (helped) coin WYSIWYG, "browser", and "user friendly", all in the '70s.
The BBC somewhat wrongly implies cut/paste was from the '80s, in the post-computing-invention era.
Ok, we changed to that article from https://www.bbc.com/news/world-us-canada-51567695. Thanks!
What a sad story.
How so?
Larry Tesler: Computer scientist behind cut, copy and paste dies aged 74
Just as others have pointed out that cut-and-paste was a term around a long time before this reference, so too was WYSIWYG. The Dramatics had a top popular song in 1971[0] using the same phrase as its title (albeit spelled slightly differently).
I hope we don't hear next about the computer hero who "invented" the term "desktop", or "folder".
[0]https://www.discogs.com/master/185397-The-Dramatics-Whatcha-...
"Cut and paste" was of course a term used with paper before computers, but arguably the computer version of it is not quite the same, because you have that hidden buffer ("clipboard") and can usually paste the same cut item multiple times. Adapting the physical-world cut-and-paste process to the computer realm can count as an invention.