HN Theater

The best talks and videos of Hacker News.

Hacker News Comments on
Dark Patterns – User Interfaces Designed to Trick People · 639 HN points · 86 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention the video "Dark Patterns – User Interfaces Designed to Trick People".
Summary
Dark Patterns are tricks used in websites and apps that make you buy or sign up for things that you didn't mean to. The purpose of this site is to spread awareness and to shame companies that use them.
HN Theater Rankings
  • Ranked #30 all time

Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Unfortunately these sort of practices are getting very common. This should be submitted to
Thanks for creating this service! Do you also get videos embedded in submitted page? E.g. like this one -
Dec 06, 2016 · 639 points, 311 comments · submitted by alexpoulsen
One pattern that I consider "dark", but don't see in this list, is using loaded options on a dialog box. One that I often see in apps is like:

    Rate our app!  
    <OK> <Not Yet>  
Those really get under my skin because the developer is clearly trying to play a psychological trick on me, but it's so brazen and obvious that it just pisses me off. And bigger companies do it too (e.g. Google).
There's another brilliant app rating pattern (perhaps a dark one). A dialog pops up and asks if I loved their app. A "yes" takes me to the marketplace for rating, but a "no" only allows me to send them feedback. It both positively skews the ratings and collects feedback from disgruntled users.
And you almost need to do this because app store reviews are so deadly otherwise. You have no way to respond to reviewers (who've had problems) as you might on TripAdvisor, and virtually no one goes out of their way to find an option to send feedback before going nuclear in a review.
The Google Play store, at least, allows developers to reply directly to reviews. Others can see their reply as well.
There are hundreds of apps that enable this behavior and an entire industry built around them. I know because I helped refine the UX of one such app.
Some vendors do this well. Starbucks gives you a list of things to pick from, including parts of the app that have been recently updated.
Not necessarily dark though... as an app developer I'd actually really want to hear the feedback from the people who dislike my app. As in any business, oftentimes the most disgruntled customer is the one that can provide the most useful feedback.

An even smarter way (in my opinion) of getting 5 star reviews is by showing the dialog only after x hours of use. Users most likely to rate badly will have the app uninstalled before the dialog shows.

The "dark" part of it is if the dialog doesn't explicitly tell me that it will take me to the App store once I click on one of the buttons.

Otherwise I don't mind such dialog boxes.

The dark pattern is that you're biasing your review rating by not taking negative users there.
Which benefits everyone if the problem is a simple thing you can resolve by helping them directly. Support is also part of the experience, and good customer support should factor into the review as well.

I can see how you'd see this as providing a positive bias, but I see it more like getting a chance to see if you can't help the customer out before they give up on your app. It also reminds the customer that there are people on the other end - so even if the issue can't be resolved, and you still get a one-star rating, the level of vitriol seems likely to be reduced - something all too easy to forget when angry-reviewing.

Of course if they just take the feedback and dump it, that's a different story - but again, I would think anyone with that experience would still leave the negative review.

TL;DR - too many downloaders use negative reviews as a combination support request and cudgel. I think this is a reasonable defense against that.

Maybe using a dark pattern here is an effective way to fight fire with fire, but that doesn't make it less of a dark pattern.
Users with a negative experience are more inclined to leave a rating than users with a positive experience, which means the ratings don't reflect the reality of the overall user experience in the first place.

This technique is a genuine way to encourage sharing positive experiences about the app. At the same time, it offers the app provider a chance to improve a bad user experience.

Whether it's a dark pattern or not really depends, I think, on the motives: are you genuinely trying to make the app better, or are you only interested in people's perception of it?

Here is a great article on the topic which discusses the same issue:

Other big players like Microsoft are doing it too. And that dialog can be worse:

    Do you like our app?

    <Yes, rate it> <Later>
It's also freaking annoying because it interrupts your workflow, the thing for which you ended up installing that stupid app in the first place. It's basically disrespectful of their users' time and needs.
Medium does this too, no option to cancel. You can only click later. Tired of growth growth growth by all means...
That particular phrasing also discourages negative feedback.
I feel you... I do it in my app and I hate doing it, but it's a necessary evil. Way too many people see the comments/ratings as either a support board or a way of punishing bad apps, but they very rarely give a good or even just decent review without being asked.
> I feel you... I do it in my app and I hate doing it, but it's a necessary evil.

If we ever meet, I might ask you if you want to be hit in the face now or later. I hate doing it, but it's a necessary evil.

It's not necessary at all.
You may want to reconsider. I've had much better success with providing "Rate Now", "Rate Later", and "Leave Me Alone!" options. Even great apps are prone to bad reviews if the user is being prompted for the 5th time.

Like many users, I too have fallen victim to a temporary fit of rage, leaving a 1-star review with a one-liner of nonconstructive feedback for an otherwise useful app.

As if to say "Fine, you want it? Here's your god damn review!"

> I feel you... I do it in my app and I hate doing it, but it's a necessary evil.

This is what makes the IT part of my heart die a little inside even more: when even the "good folks" in this industry feel like they have to do obnoxious things just to get ahead or keep up.

Folks, this is why the bigger arguments about competition or arbitration and so on always come back to the same point ("there's no meaningful choice"): Eventually, everyone winds up doing the same thing because it's the only way to simply not lose ground to the others who are doing the obnoxious-overall-but-good-for-just-me-in-the-short-term tricks. If I uninstalled every app that demanded this...well, I'd have very few apps. And that's even worse because it's just obnoxious enough to help the individual app developer but not so obnoxious as to spur people onto meaningful action, yet the annoyance is still present...always grating on nerves...disrupting just a little bit of productivity or happiness...needlessly.

> when even the "good folks" in this industry feel like they have to do obnoxious things just to get ahead or keep up.

That's when you have to start thinking about regulation.

On the case of the Android store, regulation by Google. Their inaction is harming everyone.

Do not delude yourself: The “good folks” in the industry do not do this.
See also: the Windows 10 "upgrade" fiasco. It's as if they took their ethics lessons from the bully in _Calvin and Hobbes_: "Yes means no and no means yes. Do you want me to hit you?"
Tinder is the worst I have seen, they require you to pick between 1-5 stars to get the dialog to go away at all. Guess who always gets one star.
Tinder has another horrible UI trick, not sure if it qualifies as a dark pattern or just terrible coding. On Android, they require you to enable the global option "Always allow Wifi scanning" in order to get your location. If this is disabled, they pop up a dialog saying "Enable? Yes | Cancel". Hitting "Cancel" immediately pops the dialog back up again, so you cannot use the app at all until you click "Yes".

edit: Some of us do not want to enable this power-draining, privacy-sucking global option just to use Tinder. An xposed framework module was created to bypass the check, but Tinder has actually begun checking for it and the app doesn't work properly if it is enabled.

Yeah. They want the location always. Fuck that. Why not just take it from the ip address you can detect?
Yeah, that should be grounds for expulsion from the Play Store if you ask me. It's clearly an abuse of the runtime permission system.
This annoys me so much that when an app does this to me I actually go to the app store to give them 1 star.
IIRC, you can hit the back button on that.
Nope, you can't.
Yes you can, on Android at least.
Wait, if they're requiring you to pick 1-5 stars to get the dialog to go away, it must be a custom dialog built by them, right?

I bet they take your one star and just hide the dialog, but take people who rate it as 5 stars to the "real" app store rating page.

On Android, apps can't self rate like that, all they can do is take you to the store page and let you rate their app. The app can't know what rating you gave.

Does iOS let apps self rate like that? This seems inconceivable to me.

Bottom line: when an app keeps pestering you to rate it, say "Yes fine I'll rate you", they take you to the app store and there, you do nothing (or one-star them). The app should register that you rated them and stop pestering you with that dialog.

I don't think either platform lets that happen. Still, if you know your user is going to rate anything less than 5 stars, why would you still take them to the store to rate? And the amount of people who lie on those dialogs is quite small.
It's absolutely possible on Android. Just show the user a dialog which contains 5 star buttons. If they press the fourth or fifth star, take them to the app page on Google Play, otherwise don't do anything.

It's not allowed in the Google Play TOS, and it's one of the darkest patterns there is, but it's technically possible just as it is on iOS.
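The gating logic described above is small enough to sketch. This is a platform-neutral Python sketch with invented names (a real app would wire these actions to store intents on Android or iOS), shown only to make the pattern concrete:

```python
# Sketch of an in-app "rating gate": only users who tap 4 or 5 stars
# are sent to the public store page; everyone else is silently routed
# to a private feedback form. This is the dark pattern described above,
# and it is against Google Play's policies. All names are hypothetical.

STORE_THRESHOLD = 4  # assumed cutoff: only 4- and 5-star raters go to the store

def handle_rating_tap(stars: int) -> str:
    """Return the action the app takes after an in-app star tap."""
    if stars >= STORE_THRESHOLD:
        return "open_store_page"    # positive rating reaches the public store
    return "open_feedback_form"     # negative rating never reaches the store

print(handle_rating_tap(5))  # open_store_page
print(handle_rating_tap(2))  # open_feedback_form
```

The only "trick" is the threshold: ratings below it never reach the public store page, so the public average is biased upward.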

I think maybe you're not parsing it--they're asking you to rate the app inside the app, with a dialog box created by and owned by the app, and only pushing you to the app store if you're a 4 or a 5.

Apps on Android totally can and do do this.

And then you rate again in the app store, this time for real?
That is absolutely what happens. So irritated people like me go out of their way to open the app store and review directly.
There's also the brilliant idea that they show a giant "Why?" text box if you rate 1 or 2 stars, so that you type away your frustrations and after sufficiently venting, you do not actually go to the App Store to write that review.
This. I hate this more than I can express in a public forum.

Google is notorious for it, especially when they try to get you to use the YouTube app rather than the web interface.

It's not "maybe later" I want—it's "never ask me again".

There's another one by Google maps where you can't choose to avoid tolls very easily. You also can't tell it to remember that you never want to use toll roads, so it is very easy for you to suddenly find yourself on a toll road.
Oh, God, don't get me started on that one. Northeastern Illinois is toll road central, and I-355 is the most expensive toll road in Illinois (save for the Chicago Skyway). Then there's that short stretch of I-80 that runs concurrently with (toll) I-294, with no reasonable alternate routes. That one isn't ridiculously expensive, but it's still annoying.

The tollways here are especially heinous when you don't use them often enough to justify the hassle of getting an I-Pass, since the cash toll is twice the I-Pass toll.

Google Maps on Android makes heavy use of dark patterns. If you are on the go and open the app, it requests that you enable the location service to continue. The choices are: "Enable location service" or "Cancel". It is phrased to suggest the app will not run if you tap "Cancel", but it works perfectly if you do.

Worse, if you agree, it enables the location service for everything, all the time, while giving you the impression it was just for the Maps app.

I just discovered that 2 days ago and I must say I was really really angry at Google for this clear dark pattern use.

Thank you. This has been going on for a while (as I recall, a Maps update last year first committed this crime - wish I had an apk backup of the prior version).

I now use HERE WeGo or OsmAnd (on non-GAPPS devices, from f-droid). While the experience is not nearly as cool, I love the fact that I am not participating in uncontrolled monitoring and unexpected battery drainage.

+1 for OSMand. The app keeps getting better and better, kudos to developers! It might not be as "cool" as G Maps (though I'm not so sure about it), but at least I can take my maps with me everywhere I go. Not to mention the fact that it is not sending my position anywhere.
And if you have the location service always enabled this gets creepy
I am prompted with that dialog each time I activate the GPS to use OsmAnd (for example).

The worst is that if you accept it once, this setting is saved and there is no obvious way in the UI to change it back (which was possible in the past I think).

To reset it, go in the Applications manager in the settings, choose Google Play Services and reset all its data.

There is no way it is a bug, someone had to think and put this deceptive behaviour as a feature (and someone had to design its deceptive UI, someone had to do unit testing for this deceptive behaviour, someone had to QA it…).

I once dismissed it for like 2 months before I accidentally accepted it.

And Google is a really shitty app publisher in more ways than this. Has anyone ever seen a meaningful changelog in any major app of theirs?

I've turned off Google Play Services' permission to access location services, camera and microphone. Google Maps now basically doesn't work (it runs at ~10fps and the search box is non-interactive). Gmail scolds me every 10 seconds while composing emails that "this app won't work properly unless you give Google Play Services access to your camera and microphone" (but works fine otherwise.)

Their piping of apps' access to things like location through Google Play Services to force you to give them access makes a mockery of the permissions system. I really don't like the way they've been doing things the past few years.

Edit: Gmail complains about lack of access to camera and microphone, not location services. Fixed.

Wait, Gmail is using location?!

When Google released the new permission system in Android, they blew their one chance to actually make permissions meaningful. The fact that "portscan my network" is one of the Other permissions is testament to how unconcerned with user security and privacy they seem to be as an organization (despite, no doubt, some individual developers who care). I'm pretty close to deciding that my next phone will be dumb and featureless.

Wait sorry - it was camera and microphone that it nags me about. I'll update the other post.

But still, I can't see why Gmail should need access to those devices, and far less why it should harass me so aggressively about it when it works fine without them.

Just conjecture, but I feel like there was a sea-change at Google regarding attitudes towards users' privacy, about the time that adblock became widespread and competition started heating up between them and Facebook for ad revenue. They're behaving far less ethically than they did even five years ago.

Yeah I hate that. Beyond what you said, it also suggests to me, "Not Yet must mean that it will annoy me again and again."

Even if that's not true, I may never visit the page again if my perception is that it's gearing up to annoy me.

I will often see: OK, NOT YET, DON'T SHOW AGAIN. Which I think is fine.

Yeah, and I don't expect that anyone tricked into / accidentally clicking "rate now" is going to leave anything but a zero-star review complaining about how the app tries to trick them and has annoying nag screens. I've written a few of those myself, then chuckled to myself since the developer was so short-sighted. If more people did that, they'd probably lay off with the "rate my app" pop-ups. If I REALLY love an app, I'll go find the developer on twitter and thank them, and write some nice reviews.
There's also the "pre permissions pop up" trick where a dialog that looks almost exactly like the iOS permissions box shows up first, and only THEN does the real one show up if the user "agrees" to accept it.

Remember, as an app developer, if a user denies the iOS permissions dialog in your app, you can't EVER show the dialog again -- the user has to manually leave the app and re-enable the permission in the iOS Settings.

Successive calls to the function to ask permission automatically return false for "denied" so it is in the developer's best interest to try to avoid showing the real iOS permission pop up unless the user somehow indicates that they are going to accept the permission (in this case, in the "pre permission" clone dialog).

There are even GitHub projects and CocoaPods for this:

When I used this "trick" (yes, I'm guilty), only a handful of the thousands of users who accepted the soft prompt went on to deny the real iOS hard prompt.
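The flow of this soft-prompt trick can be sketched as a tiny decision function (Python, with invented names; the real behavior lives in the iOS system permission API, not in app code):

```python
# Sketch of the iOS "pre-permission" (soft prompt) pattern described above.
# The real system prompt can effectively be shown only once: if the user
# denies it, later requests auto-fail. So the app shows its own look-alike
# dialog first, and only escalates to the real prompt on a soft "yes".
# Function and state names here are illustrative, not a real iOS API.

def request_permission(soft_prompt_answer: bool, system_state: dict) -> str:
    if system_state.get("denied"):
        # The real prompt was already denied once: iOS auto-denies from now
        # on; the user must re-enable the permission manually in Settings.
        return "denied"
    if not soft_prompt_answer:
        # "No" on the fake dialog costs nothing: the single real ask is
        # preserved, and the app can show the soft prompt again later.
        return "ask_again_later"
    # Only now spend the one real system prompt.
    return "show_system_prompt"
```

The asymmetry is the point: a soft "no" is free and retryable, while a real system-prompt "no" is permanent until the user visits Settings.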

Is it a dark pattern, though? You're respecting the user's wishes to not use that permission (ignoring for the moment that you couldn't use it even if you wanted to).
Yes, permission requests should only show up at most once, per Apple, so it is definitely a gray area.

Not to mention, the pre permission dialog is usually deceptively similar to the Apple one.

It's not a grey area, it's a dark pattern, and you're basically intentionally breaking Apple's app policies, and screwing your users over just to get more permissions than you would get if you followed the rules.

Fuck you.

No need to throw shade without knowing specifics.
You already threw shade when you declared yourself "guilty", and you readily admit it is against Apple's policies.
That's simply not true. Moreover, there are some people with pretty strong arguments who do not even THINK this should be considered a dark pattern.

However, it doesn't seem like you are interested in furthering this discussion. Would you have rather I not posted at all?

There are many YC companies listed on this Dark Patterns site, and their founders definitely frequent this forum. So far everyone else has been completely mum.

Given that you are on YC's community web site, may I suggest using some tact?

I really don't see a problem with it. Apple's default UI is not great, and it can be highly confusing if the user doesn't know why the app is suddenly asking for access to their location or photos or whatever. By explaining and asking first, you smooth that over. There's no deception or trickery, it's just a matter of not surprising users.

(And to be clear, I've never built one of these extra permission prompts, so I'm saying this purely from the perspective of a user.)

I've often seen this used to explain what they're planning to do with the permission, e.g. "Would you like to enable camera access so you can customize avatars with photos?"

Done properly I can appreciate it, especially on Android where permissions are often nonsensically bundled together. Although I'm never sure when it's crossed the line into scam. Is the permission for getting the user's gamer id really "make phone calls"?!

The correct thing to do here is to hit "OK" and just kill the App Store App without leaving a review (since they can't tell if you actually review it). Or maybe leave a 1 star and a note that you were coerced into rating it!
1 star

App developer insisted upon a review, so they got one.

Related: there's an unfortunate cause of excessive app rating prompts — what's shown in search results/top charts/other areas (on iOS) is the average of the latest version, not the overall aggregate.

If you're at a company that does scheduled releases (e.g. once every three weeks), you'll need to continually ask people to review the app to keep that rating high.

Otherwise you only get ratings from new users and users that are discontent with the particular version. It's rare that people who have already rated the app 5-stars will continually go out of their way to rate the app 5-stars again without prompting.
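A toy calculation (plain Python, made-up numbers) shows why a per-version average pressures developers into re-prompting happy users after every release:

```python
# Toy illustration of why per-version rating averages (as iOS displayed
# at the time) push developers to re-prompt after every release.
# All numbers are invented.

overall = [5] * 900 + [1] * 100        # lifetime ratings: mostly happy users
latest_version = [1] * 10 + [5] * 2    # just after release: mainly complainers rate

aggregate_avg = sum(overall) / len(overall)
latest_avg = sum(latest_version) / len(latest_version)

print(round(aggregate_avg, 2))  # 4.6  (what loyal users actually think)
print(round(latest_avg, 2))     # 1.67 (what the store shows after release)
```

Right after a release, the displayed per-version average is dominated by the few unhappy users motivated enough to rate unprompted, so the developer nags everyone else to restore it.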

My favorite one of those was Android 6's "share my location and the wifi ssids of everyone around me with google button" on a Nexus 5.

It had a [don't ask me again] checkbox that when set greyed out the option to disagree.

It's a case of bad UI rather than a dark pattern: if you click 'disagree' it always remembers the decision; the checkbox only affects the 'agree' button.
Even worse: the ones that go "Do you like the app? / Are you happy?" Either option takes you out of the app: "No" opens your email client with the To: field pre-filled with their support address, and "Yes" opens the app store.
We need to start rating apps that do this as 1 star, the problem will solve itself.
<Yes, send me offers!> <No thanks, I like to waste money and wallow in ignorance>
Why is that a "dark pattern"? What trick is it trying to pull on you?
It presents the user a false dichotomy.
I just click ok and then don't actually do the rating.
There's a tumblr for that! They call it "confirm shaming".

And yeah, it's UX cancer.

And even Apple is doing it. Do you want to update your iPhone now or tonight?
I find it insidious how they present the lock screen at that point -- I clicked to dismiss, but they're asking for authorization still.
Right, the whole "enter your PIN to update tonight" screen. I just gave into the nags and updated my old iPod Touch (which I only use as a remote for some home automation things), and now it runs considerably slower.
Yeah, I gave in on my iPhone and now it runs really hot. But this time around they're being so annoying about it that having an OS that clearly wasn't designed with my hardware in mind is probably the preferable option.

Long story short, I'm pretty sure this phone is my last iPhone.

Software updates are the rare case where I can forgive it. It's annoying to have clients actively resisting updates, especially security updates.
If it were only about the security update. But in the case of iOS it is not. It's the opportunity to start the nagging all over again. Please start using iCloud. Let's use Apple Pay now. Let's use Apple Music.

On every single minor "security" update...

Even worse: let's break backwards compatibility with your paid software so that you have to purchase an upgrade license... If you've had a Mac long enough and you run paid software, you know what I'm talking about. :(
Considering the wide availability of beta OS updates, I'd say that's on the developers of said software. Sure, as someone who works in audio and has to deal with a range of software instruments/effects and audio interface drivers, I know exactly what you're talking about, and I have little understanding for developers of expensive software/hardware who don't test and fix their products before OS releases.
I certainly disagree on this point. Take the case of a small software company that ends up not being all that successful. I purchase a license for said software; it is useful for me, but the company goes out of business since they are unprofitable.

Now Apple breaks backwards compatibility with this software when I upgrade.

Even in the case of the software maker still being in business, should they have to provide me free upgrades for life? If they do that then they have diminishing profits with every upgrade. On the flip side, should I have to pay for a new license when I don't need the new features and am perfectly happy staying on the existing version if it only would continue to work?

To give two quick examples of this I used to run Parallels on my Mac so I could run a couple of Windows apps that I absolutely had to have and it would have been very inconvenient to use a boot camp setup and reboot every time I needed access. Then when the Mac version changed Parallels quit working; my only option was to buy a new license for Parallels. To make matters worse; you never really know what software will break when there is an upgrade, but if you don't upgrade then you end up in the boat that some other piece of software you are running does get updated, but you can't use that software without getting the latest version of Mac.

When I compare this to Windows: I'm still running today, on Windows 10, software that I purchased for Windows 95. For all of the many things I dislike about Windows, the one thing I applaud them for is their level of backwards compatibility.

> Considering the wide availability of beta OS updates, I'd say that's on the developers of said software.

Why? If the developers produced a product that works and sold it to a user at a fair price, why on earth should those developers then have some sort of indefinite responsibility for producing updates because other people chose to break compatibility?

There is a reason we have standards and there is a reason it's important for system software in particular to define and support standard interfaces. In fact, that is arguably the primary function of an operating system: to provide a stable platform on which other software can run.

The fact that some of the main OS providers no longer seem to recognise this and instead consider instability and backward-incompatibility to be strategic assets makes me genuinely afraid for the near future of our industry. If I want to do something as simple as buying a laptop for one of my small businesses tomorrow, so someone can get on with useful work and will be able to continue doing so for a long time, there are currently no good options available.

Or how about popping a dialog to inform me that cellular data is disabled for $app every time I open it while not connected to wifi, even though $app most certainly does not require access to data in order to work.

I'm of the "avoid popups at all cost" school of UI design. As soon as you let developers have modal UI that isn't meant to accomplish a specific, user-initiated task, they will immediately start using that power to be obnoxious.

Usually $app actually requires access to the internet for stuff you are unaware of.
But Apple doesn't just push security updates. They have a well established track record of pushing entire new OSes the same way, a very poor record of crippling performance or outright bricking older ("old" is hardly a fair description) devices in the process, and a stubborn insistence that applying these updates is a one-way process that may never be reverted regardless of any harm they do. Any business that adopts such customer-hostile policies deserves every bit of resistance they bring upon themselves, and I have not the slightest sympathy for them.
Hm, iOS 10 seemed more like a feature release than a security one. I've finally stopped getting update notifications after I rolled back my phone.

Update: Still getting nuisance notifications.

I can't, since the "tonight" option never works, and "right now" I'm interacting with my phone for a specific purpose (like navigation) that won't wait half an hour or three quarters of a charge for an OS upgrade.
Security update? 9.3.4 => 9.3.5? That's an update

9.3.5 => 10.0 is not a "security update".

Constant nagware until you update is asinine. "Update Now or Later?"... f* you, I don't like what I see in 10... but I'm stuck with dancing around daily fucking warnings.

I swear... I'm not bitter :) I also won't buy another iPhone.

Workaround: the iOS10 update requires 1GB of space. The nag only appears if the update has been auto-downloaded. Use "Manage Storage" to delete the 1GB update file for iOS10. Then load enough music/video on your iPhone so there is less than 1GB of space. Presto, no more iOS10 download (won't fit) and no more nagging.
My problem with that is the dance you have to do afterwards...

* It won't download over cell towers
* It won't download if less than 1GB available on phone

Now I've got to balance my space to within 1GB and minimize time outside of that while on Wifi.

Swap out audio books? Take too many pictures/videos? Podcast downloads? Game/App downloads?

I know this option exists... and it's just as crappy - if not more so - than clicking "No" every day.

Quite nice. Might do this for the test devices we've got at work to prevent accidental updates.
Gross that you have to go to these lengths, but upvoted for having a solution.
My solution was to give the iPad to my kid and get an Android.
Not sure if this was intended but this comment is actually kind of funny considering Android's lack of updates is a frequent point of criticism. Update notifications are probably the one thing you don't have to worry about with Android.

On a more serious note, in my opinion updates are an exception and the user should be urged to do them.

I just got an update yesterday. (OnePlus 3)
I don't necessarily want to AVOID an update. I don't update to new OS's.

XP sucked until SP2. 8 sucked until 8.1u1. 10 gets better each roll.

My issue with iOS 10 is the changes to the underlying flow (like unlocking the phone) and the lack of tweakability (basic issue with iOS). Naggware on top of that grates my nerves.

And you will never get an OS update again. Android products seem to be orphaned at whatever OS is current at the time of production.

Just bought a new Android tablet and it had lollipop. Turns out you cannot get the most recent OS for it, so I am stuck with a 2-year-old OS.

My Note 4 came with kitkat installed two years ago. Shortly thereafter I was able to upgrade to lollipop. For the past several months now I've been ignoring the available upgrade to marshmallow because honestly neither my cell provider nor Google has any incentive to let me keep a perfectly good phone for another year or two, so I'm worried a major performance hit is the only real feature I can expect from the new version. I know I'm being slightly irrational in this concern, but can anyone confirm that the jump from Android 5 to 6 doesn't cripple phones with "old" specs (since it's targeted to the latest phones)? I'd love to update if it turns out this is not the case.
5 to 6 was fine on my G4. I'm not sure how it compares to the Note 4 but the phone is at least a generation old. There is another pending OS update I've been ignoring for a few months, not for fear of being crippled, but just because I can.
Ditto 4.4
First of all, there's a big difference in how Android gets upgraded. Google doesn't need to upgrade the base operating system in order to push changes to its apps, and they can even push security fixes to the base system by means of Play Services and supporting libraries, through which they ship backports.

It also depends on the Android device. I have a Nexus 6, released 2014, that's on the latest Android 7 (Nougat) and my old Nexus 4, released 2012, is on Android 5 (Lollipop).

Maybe I would have preferred it if my Nexus 4 also got upgraded to the latest, but then again it might not have the needed juice; I like very much how it runs right now, and all the apps I use still work and receive updates. And I also have an older iPad that got upgraded by Apple and is now unusable.

Even so, with Apple there's no way out - once upgraded, it stays upgraded, and once support is dropped, it stays dropped. With my Nexus I have a choice - because of Android's nature I can always use CyanogenMod. It's not exactly a solution for the non-technically-savvy, but it works.

The crying bear got me laughing. Did someone really pull such a low blow as this?!
scrolls down for the crying bear

Holy sh*t! That's really low.

The only one where I can give an honest answer is "No, I love my laptop too much to upgrade." I'm actually having exactly this problem. I would like to upgrade to a new notebook with more RAM (4 GB soldered in is becoming a bit claustrophobic these days), but I have several rare autographs on my notebook cover and don't want to abandon it.

I don't get how this practice gets to live on anything other than uninstall screens. If I'm presented with an option like that, my first response is to close the window (whether app or webpage), then uninstall everything from that vendor.

Heck, I even uninstalled every jwz package from my Debian systems after the xscreensaver fiasco.

How's that shaming?

It's not saying "Do it later, I'm a lazy bum."

It's not directly shaming, but it presents the user with three options.

1. Comply with whatever they want you to do

2. Lie (say that you are going to comply later without intending to)

3. Walk away from the product (perhaps forever).

This one might be the best: (Note the content of the article, then move your mouse up to close the tab. Irony!)
That has /got/ to be deliberate.
What is supposed to happen? I moved the pointer and closed the tab without anything happening.
A popup which shows this dark pattern. Try it again in a private/incognito tab since it'll only do it the first time. Or don't bother: it's not exactly super exciting.
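The trigger-once behavior described here is typical of "exit intent" popups. A minimal sketch of the logic (hypothetical code, not any site's actual implementation): fire when the pointer leaves through the top of the viewport, and remember that it already fired.

```typescript
// Hypothetical sketch of a one-shot "exit intent" popup trigger.
function isExitIntent(pointerLeftWindow: boolean, clientY: number): boolean {
  // Heading for the tab bar or close button exits through the top edge.
  return pointerLeftWindow && clientY <= 0;
}

function shouldShowPopup(alreadyShown: boolean, exitIntent: boolean): boolean {
  // Only fire once per visitor.
  return exitIntent && !alreadyShown;
}
```

In a real page, `pointerLeftWindow` would come from a `mouseout` event whose `relatedTarget` is null, and `alreadyShown` from a localStorage flag, which is why the popup appears only once per browser profile and again in a fresh incognito window.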
Honestly one that only appears when you go to close the tab is a lot less obtrusive. And besides that I don't think being in-your-face really makes something fall into "dark patterns."
Did you read the content yourself? They are saying that the "show before close" is a good thing, so what's the irony?

Read the "Step Into The Light" section.

It's not the pop-up that's ironic per se, it's the phrasing of the "no" button.
Great video and great website.

My only nitpick: the author wants the industry to agree on a "code of ethics."

Unfortunately, such exhortations strike me as naive. They are unlikely to work, because the truly bad actors will continue to use dark patterns regardless, putting pressure on all other actors to follow suit. The key challenge is not in getting the good actors to do the right thing, but in preventing the bad actors from doing the wrong thing.

Meanwhile, even sophisticated consumers like HN members pay a cognitive or financial cost to deal with dark patterns every day, which are prevalent throughout the web. Everyone I know is sick and tired of this crap.

The only viable solution I can think of is regulation in the form of a consumer-protection agency, working with the industry, that can fine bad actors up the wazoo.

Does anyone here have a better suggestion?

What about certifications? Some certifying org can (for a fee) routinely audit websites for whether they use dark patterns. Then, people just know to avoid sites that don't have a good certification, and can report shady stuff.

If they push the envelope too hard, you report, they follow up and can potentially pull the cert. Maybe have browser integration too. (But good luck disambiguating between this and SSL certs for the average person...)

It's a tough situation.

If I'm a developer (say, a junior engineer with my first real entry-level software engineering job out of college), my direct manager (who generally supervises and stringently handholds) basically tells me exactly which features need to be implemented (and often, even how).

I don't have much of a say in which patterns (dark or light) get implemented, and I probably won't have the gall to "stand up" and "rock the boat" as a 22-year-old fresh out of school.

It's even worse if I'm married with kids... How do I explain to my wife and children why I lost my job for refusing to implement the product guys' ill-conceived version of "Roach Motel" in the frontend?

This is why I sympathize with the lowly VW engineer "fall guy" whose head met the chopping block for the pervasive, executive-driven diesel cheat scandal.

A code of ethics is a perfectly acceptable solution actually. Several other institutions have codes of ethics that must be followed and there have been legal adoptions because of them. For example the Hippocratic Oath. Many times doctors have been asked to do harm to patients. Of course it does happen, BUT if a doctor refuses and gets punished in some way for that then a major lawsuit can ensue. Many engineers have codes of conduct as well.

I think this would be much harder to pass, and would be more akin to financial advisers being required to be fiduciaries, but hey, that is now required in the US. What is required for that to happen is mainstream support, though, and that will be difficult to get. I think this, like the privacy stuff, is a little too abstract for the average person.

Blockchain based application platforms can allow re-architecting of applications such that the data that backs the application is not "owned" by the company, but instead is either public or user owned.

Currently, if you want to use LinkedIn you have to use their website. Sometimes there are 3rd-party options that consume the API, but in the current model of the web, as soon as one of those 3rd parties is seen as problematic, API access is typically revoked or restricted to give power back to the company.

In the public data model the data and API are publicly exposed and cannot be arbitrarily restricted. In the LinkedIn case this would allow a 3rd party to build a new UX on top of the LinkedIn database that excludes the copious dark patterns. Under this model, companies who abuse their users risk getting displaced by an alternate application backed by the same data that favors the user.

- Disclaimer: I work pretty much exclusively developing software in the Ethereum ecosystem which is one such blockchain based platform.

Sure, blockchain-based applications might have that ability... but what incentive do companies like LinkedIn have to do this? What incentive does any company that constantly uses these dark patterns have to use a blockchain-based application? Especially because, most of the time, the user's data is the profit machine, so companies certainly want to own the data that backs the application.
> but what incentive do companies like LinkedIn have for doing this?

They don't have any incentive and I suspect they will hold onto that data until it's "pried from their cold dead hands".

I think that companies like LinkedIn and other massive data silos are going to atrophy and die as users migrate to new platforms that treat them better and give them more control over their data and experience. I'd like to point out that while there is little incentive for current companies to adopt this architecture, it doesn't mean that new companies won't be successful implementing their business under this architecture. Admittedly, almost all of this is utterly unproven given the newness of blockchain based application platforms.

One way to look at this is that the current model of internet companies is highly anti-competitive. The data they "own" is really the data of all of their users who can freely give it to any other source they choose. The fact that they have control over the database is what gives them the competitive advantage. These new application platforms which have open public databases can change the game such that the previous closed-data model can no longer compete.

I think at this point we've amply demonstrated that the overwhelming majority of people (and I think that's probably a critical mass, in a social sense) don't really care about their data and only minimally care about experience.

What do you view is the meaningful reason for users to switch to these other platforms with some kind of better underlying data model? In addition, what's the meaningful reason for a company to adopt such a better underlying data model instead of keeping a data silo and just making better features on top of such a silo?

> What do you view is the meaningful reason for users to switch to these other platforms with some kind of better underlying data model?

I completely agree that people don't currently care about their data in the sense that people are complacent about their privacy and aren't likely to change very much in that regard.

I think people care about UX, but to what level? It might be minimal.

There are a few compelling reasons why I think these new open platforms are likely to succeed and I'll try to capture them succinctly.

1. Data Economy: People choose options that save them money or make them more money. While people don't care about owning their own data, they will care about a new platform that lets them earn money for passive things like keeping their smart phone location services turned on, or allowing access to their browsing habits.

2. Account Portability: Currently, if you transition from selling on eBay to Amazon, or from driving for Uber to Lyft, you have to start back over from zero. If you own your data, you just bring it with you to the new platform, and all of your reputation and whatnot comes with you.

3. Network Effect: These types of open platforms are capable of robust cross-platform integration. Right now we see the power of this in things like the suite of products Google provides. We can have this kind of deep interconnectivity without needing the applications to come from the same source.

I also want to acknowledge that this isn't going to be a smooth ride and there are big challenges to overcome, but the potential exists and it won't happen if we don't try.

> what's the meaningful reason for a company to adopt such a better underlying data model instead of keeping a data silo and just making better features on top of such a silo?

I believe that the article linked below titled "The Golden Age of Open Protocols" is the most compelling argument I've seen.


What about an organization that helps consumers cancel or opt out of deals instantly? Per the talk, once a dark pattern gets a consumer to accept a deal, it is very difficult to leave, so turning to such a service could work, especially if it can cancel hassle-free and very quickly.
> The only viable solution I can think of is regulation in the form of a consumer-protection agency, working with the industry, that can fine bad actors up the wazoo.

A code of ethics would be 100x more effective than this, and still be ineffective.

I agree and that goal sounds similar to the goal of Kill Analytics [1]. "The industry" doesn't even exist as an entity. Gov regulation seems like the only viable way to fix it. But on the other hand, Google AdSense has done a lot to improve advertising online. Popups and bad ads used to be waaay worse than they are now (not that they are okay now). So maybe we do just need a few big players to step up and change things.


For advertising, there is an "industry" that can agree on such things, and it's called the Interactive Advertising Bureau (IAB). In fact, they're now accepting public comments until December 22 on the new ad types that are supposed to be "leaner":

That's a good point for advertising, though the IAB doesn't touch much on-site UX outside of display ads. In my limited experience, the IAB perpetuates/supports what most consumers would see as deceitful. Not CRO, but excessive data collection and sharing.

(I'm IAB ad ops certified)

Does anyone know how this is different from/compares to Privacy Badger[1]? It is also a chrome[2] and firefox extension[3].




Privacy Badger is more on the defensive side: It allows you to avoid tracking, so you just disappear from the POV of the analytics platform.

Kill Analytics looks more offensive. It actively sends garbage data to the analytics platform, thus destroying its value proposition to the site operator and discouraging its continued use.

> Gov regulation seems like the only viable way to fix it.

So we can have more "This website uses cookies, okay?!?!?" banners popping up all over the place? No thanks.

Besides, do you really want a bureaucrat telling you how to design your web site, with penalties enforceable by law?

Yeah, those popups suck. I think there are better ways to implement things like that. Maybe a button in the browser that reports sites to a 3rd party, which assesses them and can hand info over to the government for levying fines.
>Besides, do you really want a bureaucrat telling you how to design your web site, with penalties enforceable by law?

"Bureaucrats" already tell you how to design your website. There are already laws against some forms of deceitful advertising, unfair trade practices, information sharing, and the like. For example, let's say I was designing an airline ticket booking site...

>For both domestic and international markets, carriers must provide disclosure of the full price to be paid, including government taxes/fees as well as carrier surcharges, in their advertising, on their websites and on the passenger’s e-ticket confirmation. In addition, carriers must disclose all fees for optional services through a prominent link on their homepage, and must include information on e-ticket confirmations about the free baggage allowance and applicable fees for the first and second checked bag and carry-on.

How about those hotel "resort fees"? Currently the FTC says they are OK as long as they are disclosed before booking. So a hotel can advertise $20/night but, when you go to book, say "lol btw there's also a $30 resort fee." This of course makes comparison shopping impossible and exists for no other reason than to deceive you. Rumor has it the FTC is going to backtrack on the policy and disallow separate resort fees.

Those aren't web site design issues, they're truth-in-advertising and business practice issues, which apply to all forms of advertising and business.

I'm talking about technical and design issues, like requiring every web site in the EU to pop up a stupid banner while you're trying to read something that blocks the content just to say, "Hey! We use cookies! Got it?" Now imagine taking that to the next level, with pages of regulations saying where and how other form elements must be laid out on the screen. Imagine another banner popping up on every EU web site saying, "Hey! Here's the link to our privacy policy! Got it?" Then clicking that away stores a cookie, which pops up the cookie banner... All because some bureaucrat who doesn't even know what an HTTP cookie is wrote a regulation requiring everyone on the whole continent to acquiesce to the bureaucrat's ignorance so he can claim to be pro-consumer and privacy-conscious and get reelected.

You want more of that?

> I'm talking about technical and design issues, like requiring every web site in the EU to pop up a stupid banner while you're trying to read something that blocks the content just to say, "Hey! We use cookies! Got it?"

When I worried about the impact of the EU cookie directive, I read it. Surprisingly, it only requires cookie notifications for websites that use cookies for purposes that are not strictly necessary for the site to function. This means that the operators of web pages that show cookie notifications are probably spying on their users for advertising (or other) purposes. The EU cookie directive only makes this obvious.

I think EU politicians know what cookies are and how they are used. You can see that in the list of cookies exempt from consent:

• session IDs

• authentication cookies

• user-centric security cookies

• session-limited multimedia player cookies

• social network cookies (for logged in members of the social network)


Thanks for all that info. It's really nice to have both source material and a reasonable summary.
> So maybe we do just need a few big players to step up and change things.

The question again is, how to make those big players do something that will lose them money.

Do it in a way that earns them more money. It sounds difficult, but that is what AdSense did. If legitimate UX practices earn more, companies will use them.

Potential scenario: someone makes a browser plugin that blocks dark patterns as if they were ads, so companies who use them don't see any traction with them.

Or 50 years out when everyone is computer literate, users are aware of dark patterns and punish companies who use them by not buying their products.

AdSense also had the massive data Google had on users to entice advertisers with. What would entice Google this time?
Conversion rate optimization tools and data. Businesses turn to dark patterns to increase their conversion rate, and there isn't a ubiquitous tool for them to use. In theory, Google could provide much better funnel analytics and offer suggestions/options for improvement that aren't dark patterns.

It's a stretch, but hey...

There's no search but I'm curious if LinkedIn is included. I never took screenshots, unfortunately, but I feel like they've had close to 5 in my own experience alone.
Here's one of the most egregious examples of LinkedIn's dark UX. The transition from "I'm accepting incoming invites" to "I'm inviting people to connect or join LinkedIn" is intentionally subtle:
That link itself is a dark pattern. It tries to get you to sign into Google; you can't see the photos otherwise.
Not sure what you're seeing. I opened the link in an incognito window (therefore, no google signin), and everything works fine without signing in:

Also tried a different browser with a clean slate, no issues.

Maybe the link was edited?

works fine for me in incognito tab
If it were truly the case that you couldn't see it without signing in, that would not be a dark pattern - there's nothing misleading about that; it's just a policy you don't like. Using the buzzword of the moment to criticize something you don't like, without considering whether it really applies, is how phrases lose all their meaning.

(of course, that's not true here and you don't have to sign in)

The distinction is subtle, because there is a common dark pattern where websites try to trick you into creating an account to access public content, even though you don't actually need to.

I'm thinking of Dropbox, which does this when you try to access a link that someone shared with you by email

This is a dark pattern because it's trying to make me believe that I should create an account when obviously I just want to get the file that my friend sent me. The link to download the file directly is at the bottom of the message.

Here's another one I found recently: The email spam they send you is categorised into a bunch of categories. You can subscribe or unsubscribe to each individually. The 'unsubscribe' link at the bottom of the emails sent will unsubscribe you from the first category only, no matter which category the email is from! And the web site doesn't tell you this, it just pops up an almost content-free 'unsubscribed' message. So you keep getting spam until you manually load the site and dig through the settings pages and uncheck the other categories.
I was going to say, you could read this site, or you could just go to LinkedIn. It's a case study in obnoxious dark patterns.
But you learn more this way; LinkedIn is just frustrating!

A company that tries to be clear on everything is Google, but I still find they morph so rapidly that their documentation is often 2 or 3 generations behind.

Previous discussion on LinkedIn dark patterns:

And the linked article to it:

I've seen those as an HN lurker. I just like the idea of a central repository like this one.
LinkedIn is one of the most common examples when talking about dark patterns, their most famous being the number of ways they tried to get you to invite your gmail contacts.
Just reinstalled Skype and can see that they're trying to get access to my contacts also. They tell you to set up access so you can use the app.

First up is microphone, and you click allow. Second is camera, and you click allow. Third is contacts, and you click—wait a minute, why do you need this? Disallow.

Don't know what they would do if they got my contacts (hopefully not spam them like LinkedIn), and don't intend to find out.

Well considering that Skype is mainly a way to, you know, contact people it doesn't seem actually insane to consider it.
Skype should be a little more private than your cell phone number is. If I add someone's phone number to my contacts list, that DOES NOT mean I want them automatically added as my Skype contact.
Of course. But for them to ask doesn't seem like a dark pattern, since the main use case for the app is contacting people you know and it's reasonable to think most users don't want to micromanage different contact lists as you describe. The dark pattern stuff is more like when your app for rating ice cream flavors is asking for your entire address book.
I don't recall skype ever asking me for my permission to do this. I updated the app and then suddenly all my phone contacts were skype contacts and I had to go through and change the settings to never do that again and I had to manually delete each contact it had created.

To me this would be akin to Facebook automatically adding the local Chinese restaurant as my friend simply because I had their number saved in my phone.

Like many people, Skype is a fallback communication channel for me. I don't care if my brother or best friend is on Skype because I call them on the phone to talk, text message if I want to chat, or Facetime if I want to video chat.

I'd say Skype is a way to communicate with people, not a way to contact people. And for me, it's a method of last resort for people I can't call/text/Facetime/Hangout. I understand some people might say yes, but it's still somewhat deceptive to put this in the same category as "enable mic" and "enable camera", which Skype cannot operate without.

Airbnb also wants access to your entire Google contacts or Facebook friends list if you want to use the mobile app. Let's just say that I refuse to use their mobile app and only use the desktop/browser version for this reason.

You want permission to access my entire contacts/social network? No thanks.

> You want permission to access my entire contacts/social network? No thanks.

Too bad dozens of your contacts did not care about it and they have your data anyway.

I'm just realizing why the manager of the local Grease Monkey once endorsed me for "parallel algorithms".
Another dark pattern (used to gain more positive ratings in apps) is:

    Do you love our app?

      Yes         No
       |          |
       |          |
     ______     ______
     Opens      Does
     AppStore   Nothing
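The funnel diagrammed above reduces to a single branch. A minimal sketch (hypothetical names, not any real app's code):

```typescript
// Sketch of the skewed rating funnel: only users primed to leave a
// positive review reach the public store; everyone else is dropped.
type Outcome = "opens_app_store" | "does_nothing";

function handleLovePrompt(saidYes: boolean): Outcome {
  return saidYes ? "opens_app_store" : "does_nothing";
}
```

Because unhappy users never reach the store, the public rating distribution is skewed upward without any single review being faked.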
It's a bit like saying, "Do you love candidate X?", and then giving instructions for voting only to those who answer "yes".
At the same time, if someone has indicated that they want to give my app a bad review, am I obligated to take them to the review point so they can do it?
I don't think that any of these dark patterns represent the breaking of obligations by any party.

Dark patterns don't represent anything truly sinister, and in most cases they are perfectly legal. They are just bad UX because they're dishonest about their intent.

Buying tickets with RyanAir is stressful due to these kinds of practices. They're less aggressive than in the past, when I wouldn't even continue because I had zero trust in the company, but they're still sly.

A sneaky one I saw recently is something like:

[ ] Subscribe to newsletter about our services by unchecking this box.

(It doesn't matter whether the box is initially checked or not, the user will be tricked into the desired behavior.)

I don't remember the exact phrasing, and it was much more shrewd than my rendering, but it relied on a boolean flip of the checkbox's value toward the end of the field label. Any user skimming only the start of the sentence will leave it in its current state.
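The inversion can be sketched in a few lines (hypothetical code, following the example label above):

```typescript
// The label reads: "Subscribe to newsletter ... by UNCHECKING this box."
// The trailing clause flips the conventional meaning of the checkbox,
// so a user who skims and leaves the box unchecked is subscribed anyway.
function isSubscribed(boxChecked: boolean): boolean {
  return !boxChecked;
}
```

Since the site controls both the initial state and the direction of the flip, either default can be made to trap a skimming user, which is the point of the parenthetical above.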

Why bother with a checkbox at all at this point? Can't they send everyone the newsletter and not bother with a checkbox?
Probably their way of getting around government regulations.
Tell me about it.

A few years ago RyanAir's website was much worse. It's actually usable now!

I was somehow able to buy a freaking bag with my ticket. I'm surprised they didn't somehow trick me to buy their lottery too. And I'm tech savvy...

Would gofundme's entire brand and business model be considered a dark pattern? They go out of their way to make it feel like a nonprofit, promote campaigns for issues that already have actual nonprofit status/direct donation pages, and do their best to hide their fees (which, last I checked, were actually higher for legitimate nonprofits than for regular campaigns).
All of the fundraising platforms do this. They charge a fee and some go as far as "recommending" a donation amount to the platform.
I'd like to see them make their platform fees clear at check out instead of hiding them in a FAQ or About Us page.
These kinds of "hall of shame" websites are interesting to me. However, most regular users do not know or care about this stuff.

Well, in a semi-ideal world, there would be a comprehensive "hall of shame" database containing the information about the tricks, problems, dark patterns, etc. for all websites. Then, some helper apps or browser extensions could warn us about these issues while a regular user is browsing.

One of the problems with this idea is that it gives a huge authority to the owner of that database and there would be lots of questions about its neutrality.

One concrete thing that could be done right now is a Chrome extension that audits the current website for dark patterns and visibly surfaces a score, so people can flee from offenders like the plague (or like websites with broken SSL padlocks). Vote with our wallets.
In an ideal world we could detect those patterns programmatically but as you hinted, there is the question of "when is it considered a dark pattern and when is it only clever marketing?".
The answer is actually clear - it's always "dark pattern" from the POV of the user, and it's always "clever marketing" from the POV of the business applying it. Each side draws the line so far in the direction of the other that there isn't any easy compromise.
Is there a category for grouping notifications such that spammy notifications are lumped in with other important ones you might want to receive?

Google Photos is a big culprit unfortunately with their photo backup. They keep pinging a notification to get me to remove local versions that are backed up in the cloud. I don't want to do that. The only way to remove the notification seems to be disabling all app notifications.

Worse, when you go into settings, they have a variety of settings that all take you into a deeper level of settings when you click them.

Except "Free up device storage."

Clicking that does not take you to a deeper level as expected (despite looking like a nav tree item), but instead actually does the one thing I didn't want to do, with no confirmation dialogue.

Do you even actually have "important" notifications you might want to receive?

Is any automated system more important than your focus?

What I do is disallow or delete the app or "service" as soon as I receive a "notification" from it; only my wife and family are allowed to light up the LED on my phone.

An oft-overlooked aspect of dark patterns is the impact on accessibility.

Ever received a spam email, hunted for the unsubscribe link, and found it in light grey, against a white background? Imagine how much worse that is for someone with low vision. Ditto for pop-up ads with a tiny grey X in the corner.

Many of the dark patterns described in the video rely on hiding/obfuscating opt-outs and these have an even bigger impact on people with visual/processing disabilities.
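The low-contrast trick described above can actually be quantified with the WCAG contrast-ratio formula; a typical light-grey-on-white unsubscribe link lands far below the 4.5:1 minimum WCAG AA requires for normal text. A sketch:

```typescript
// WCAG 2.x relative luminance of an sRGB color (channels 0-255).
function relativeLuminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two luminances, always >= 1.
function contrastRatio(l1: number, l2: number): number {
  const [hi, lo] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Light grey (#cccccc) on white: roughly 1.6:1, far below 4.5:1.
const ratio = contrastRatio(
  relativeLuminance(204, 204, 204),
  relativeLuminance(255, 255, 255)
);
```

An automated accessibility audit applying exactly this check would flag many of the hidden opt-outs these patterns rely on.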

How about this one in the new uber app: If you disable location services (which recently switched to either "always" or "never", no longer offering "only when open") you can't use your history or saved favorite places to set a location, you have to manually type it out. Couldn't believe they would be so shady just to get location services activated.
After the first time this appeared on HN, I quit LinkedIn and deleted my profile. They still sent me "xxx wants to connect with you".

I'm really getting tired of turning down Amazon Prime on Amazon. I use Amazon less because of this. There are about three extra pages of Amazon Prime ads to click through for every purchase.

Somewhat related—Amazon Prime Video now shows ads at the beginning of videos. This wasn't part of the deal when we signed up, and it's pretty deceptive to just start doing this to customers.

I didn't see it in the agreement (actually went back and looked for something that would cover ads), and it's not clear what limits, if any, they think there are. That is, could they just decide to show as many ads as Hulu and say "yeah, we said you could have access to this catalog. we didn't say it would be ad-free".

I opted for the ad-free Hulu subscription, and that, for the most part, is great.

They do have a few programs on there that are not eligible, but then they say up front "due to streaming rights, we have to show you ads, but it's just one before and one after".

That, at least is better than the ad-laden Hulu before they offered that option, where any prolonged bout of streaming would show you the same ad over and over and over.

Video services need to be disrupted in a way that sticks (not like Amazon adding ads after selling everyone on an ad-free service). The only service I use with any regularity now is Netflix since it has no ads, but even they sometimes have problems with finding the show you actually want to watch.

I was on vacation recently and the room had DirecTV. I tried searching for a program and all the search results were for channels not subscribed. Several of the channels in the guide list were presented as if subscribed, but then when a show would start, would prompt to charge $6 to continue watching and the show would stop after a few minutes. Finally, I found a channel that was subscribed and not PPV, and when an ad came on 30 seconds later, I tried to turn off the DirecTV box.

Here's the DirecTV dark pattern: there was a "Please Wait..." message on the screen while the ad played instead of just turning off the output! How can anybody actually be making money from TV ads when they are so obnoxious?

I'm pretty sick of corporations double-dipping in every industry. Video services charge you for watching, and then sell you to advertisers. Supermarkets charge you for products, and then sell you to manufacturers. ISPs charge you for bandwidth, and then try force video services to pay as well. Where's the exit to this hall of mirrors?

Speaking of Netflix, how about those autoplay video ads for Netflix original shows?! Can't stand that shit. I'm on the edge of cancellation every time I see video previews autoplay!
Previous discussions:

I'm not sure why this appears here on HN and received so many votes.

Despite being a good site, it was last updated in 2013:

I sent them 2 dark patterns in the past which they never put up; in an email I received a long time after inquiring about it, one of the developers said they're under the pump and will get to it sometime. They haven't.

Technically, it was last updated in November 2015 (at least if the date here is correct):

Good luck figuring that out though, since the what's new page hasn't been working in pretty much forever and the only way to see if something has been changed is to check each category individually.

And yeah, it doesn't update much. I remember sending in my own examples before, and those never got added either. Kind of wish there was a site about this with a more regular update schedule or something.

Spirit Airlines is a user of dark patterns, but it's almost like a game on their site trying to avoid all of the up-sells!

I may stay away from LA Fitness just because of this article.

My recent experience with LA Fitness, or City Sports, as they call them here in the SF Bay Area, has been similar to what the video describes. However, it was not as painful as I'd imagined it would be. I went to my account page and clicked on the Cancellation Form link (not necessary but recommended). After several screens I got a cancellation form for my account, which I printed and mailed using USPS Certified Mail. After two business days I got an email stating that my account would be closed and when my access privileges would end.
I quit one in Ohio in person. They just printed out a form to sign and put the end date.

I suspect this works better on people who sign up and then never show up for four months.

Spirit Airlines is an interesting case, though, because like someone else said, they're very up-front about this. Is it still a dark pattern if the company admits it and essentially warns the user?
The unmissable warnings don't really happen until you get to the terminal though. Admittedly, I have not used spirit for some time. But the signs that say "we don't really want you to have to pay $100 for the carryon you forgot to pay for on the website, but you didn't pay for it, so tough shit" don't seem terribly genuine.

I will never fly spirit again, though. They just straight up cancelled a flight of mine an hour and a half before the flight for literally no reason, I got no notification via email/text or phone until I went to the airport to check my baggage (another thing I hate doing, but I was traveling for a pool tournament and you can't carry pool cues on a plane...). I had to book a last minute flight with another airline and my total airfare ended up being almost double what I paid for spirit.

I've flown them once. Both the flight there and the return were over an hour late. They had no automated bag-tagging terminals, so you still had to wait in line even if you checked in via phone.

The prices were just crazy too. They depend on people overlooking fees and getting screwed over to make them profitable. I got away with paying nothing over what I was quoted on the website. I will still never fly them again.

The fun part of spirit is that it doesn't end on the site, you have to dodge the dark patterns on the flight too!
At least they're pretty honest with it. Every time I walk by their counter at the airport I laugh at the sign because it's like, no frills but you'll pay for everything!

In a way I kind of like that model - just not on an airplane.

I kind of disagree, I want it to be that way in theory, but in reality it is rather predatory. A no frills flight for low budget where you pay for everything (and debatably more than an equal seat on another airline) and then they pressure-cook the passengers for a high interest credit card. Their pitch went on forever, way way longer than the usual "signup and get extra miles".

I watched a bunch of folks sign up on the flight and it made me feel really bad; the same way that check-cashing places scam their, mostly not-affluent, customers. These folks are even more vulnerable to these kinds of dark patterns.

> they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind

I dispute this. I work with someone who is a marketing person and very much drawn to dark pattern rubbish. Most recent incident is a good example - a sales promotion where something is added to the cart if the customer buys a certain product. I pointed out that this was a 'dark pattern' and made sure my boss knew that such an idea is illegal in the E.U.

For me the illegality is not something that scares me, I doubt I will go to jail for writing the code, however, using a 'dark pattern' is a problem for me.

I like to think that I am a customer focused person, my marketing clown certainly is not. In fact he cares not one iota about any of the customers, his world view is selfish.

So, I point out the illegal aspect, next thing is that he wants the items given away. I don't see how that makes our products look good and I have no idea how to make money out of making a product and then shipping it to them for free. So again I am not sold on the priority of the project.

Returning to the 'selfish' aspect, my marketing clown does not code or appreciate the effort involved in making the auto-add work. I can do the code for that and think I could get the MVP of it done in a day, with some testing after that. Then there is the thinking through of the unintended consequences - I imagine that we would get plenty of customer service emails if there was a problem with the offer. The UX is also not thought out. I am sure that I could spend all day getting the message to the customer sorted on the website and emails, but if I didn't do that then the whole thing would certainly be 'dark pattern'.

There is nothing clever about my selfish marketing clown and his naive ways. However, he gets a performance bonus based on 'customer acquisition' metrics that the rest of us don't get. He has an interest to not care about anything other than his Google Analytics nonsense, customers, rest of the team, the company making money matters not.

Although anecdotal, this is how 'dark patterns' happen - marketing clowns, their selfish ways, their inability to understand the problem space (because they don't do code or customers) and workplace bullying make these things persist.

Your experience seems to support, not refute, the idea that these are conscious decisions (albeit at a marketing, rather than coding, level, which I think is probably what the original meant) rather than inadvertent bugs.
I don't get the impression GP's disagreeing with the "not mistakes" bit, only the "carefully crafted with a solid understanding of human psychology" side of things - they're deliberate, yes, but more often than not they're clumsily-applied snake-oil, not the evil-genius stuff they're pitched as.
> not the evil-genius stuff they're pitched as.

They're refined by evolution-- ones that make money get kept, ones that don't get removed. That is about as much genius as is really needed.

Better, in fact-- if no one really knows whats going on then no one can have any pangs of ethics, no one can turn whistleblower after getting fired, etc.

Thanks for that. Perhaps what I see in the anecdotal 'marketing clown' is a lack of human empathy, which is kind of the foundation stone for understanding 'human psychology'. Some people lack the wiring to care about others and see things from their point of view, whether it is the customer or the workmate. They also do not pick up on basic psychology, in my example there is little appreciation of how upsells work - there are facets and nuances to this that anyone who works in retail gets to grasp.

So it is a case of these people not knowing what they are doing, far from knowing the customer psychology and deliberately deceiving them, there is just no thought beyond doing some silly marketing campaign for the month end results.

What is also wrong with dark pattern is that the customers have to be churned - they will not be coming back after the customer service disaster that goes with the sale.

I don't care about quarterly results, I care about building a business that does not need marketing beyond word of mouth and white-hat SEO. So, in ten years time, customers will return for customer service provided to them, not to facilitate whatever silly offer is needed for my marketing clown's month end. I want customers to want to come back for a great product and great service, this is not compatible with 'dark patterns'.

We often give intellect where there is none. Notably in TV dramas where the 'killer' is supposed to be clever, in reality most people that commit crime really are not thinking at all, they have not thought it all through. 'Dark patterns' is a bit like that.

Here is another unsubscribe dark pattern:

* Do I check the item if I want to unsubscribe from it?

* Or do I uncheck the items I don't wish to receive?

50-50 chance -- which I'm sure they love. Clicking "Update" gives no feedback either, just reloads the page.

To me it looks like the checked ones are the ones to unsubscribe from.
The button text should read "unsubscribe" in that case, not "update", and the header text "Subscriptions". It could be a lot clearer.
That was interesting. My reaction was "is this actually a dark pattern, or just designed by a complete idiot?" Because the user trying to subscribe and the user trying to UNsubscribe both have a problem understanding what is going on there. Somebody should get an award for that.
This presentation is great. I think a good next step on the path toward ending unethical UX could be in creating an international ethics review board for it.

I know it sounds silly, but this is how a lot of decisions are agreed upon by many large organizations, and help encourage involvement and following the rules. See W3, ICANN, ESRB, IETF, etc.

The "BUXE", or Board for User eXperience Ethics (just my name idea) could be founded by a group of consenting UX designers, companies, and organizations. Together they would vote on and establish UX design principles that would be up for review every year or so.

The BUXE will accept fees for reviewing a website's adherence to their ethics and would give ratings to them based on how well they follow the guidelines. The resulting site can then publish their BUXE rating on their site.

Individual developers could be given honor status if they are particularly vocal or involved in ensuring the development of ethical UX that could be accolades for them to brag about (something important to developers). It's a good resume booster, anyway.

Plenty of other ideas.

This is great, I'd be 100% behind this.
While I'm familiar with the site already, I'd love to have an RSS feed of newly added submissions. Unfortunately, when I click on Recently Added I get a list with pattern definition links that all 404, and no link to the actual submission.
That, and an API where you could `GET` a list. Would be great for building a nice browser plugin with.
That would be amazing. An educational plugin that could pop up notifications at relevant times to inform and warn users of known dark patterns would be very powerful.
There is no "works for everyone" method to stop this. You can vote with your wallet, and support vendors that act ethically, or act in whatever way you are OK with.

When this type of UI disappears from the internet, then you will know that the majority of consumers agree with your viewpoint. Until then, people keep buying those insurance upgrades, and not caring(if they cared, we wouldn't be in this situation).

If it all seems glib, that's because it is glib. People are taking advantage of other people, just below the threshold where those victims care enough to do something about it. This is the world we live in. I'm not sure how to end this on a positive note.

> When this type of UI disappears from the internet, then you will know that the majority of consumers agree with your viewpoint. Until then, people keep buying those insurance upgrades, and not caring(if they cared, we wouldn't be in this situation).

In a situation where you are legally compelled to buy insurance and essentially all providers do this that's completely wrong.

Huh? What type of insurance are you talking about? I'm talking about airline flight insurance, like in the article.
OK, so I got that wrong. The airline industry is hardly an example of robust competition either.
Found this gem while looking through the comments on YT. How is this even legal?
Thank you for sharing - now I know never to ever use G2A Shield.
Does the site support API queries? There ought to be one available so that various tools can be built. For instance, a browser plugin to warn users upon visiting dark-pattern websites.
Take it further - introduce a "darkness rating" like Google's old pagerank value. I wish Google would punish websites for doing this stuff.
They started punishing those mobile web "install our native app" interstitials about a year ago, hope there's more stuff like that coming.
I wrote an article about this. Basically explaining the difference between designed inconveniences and deceptive patterns:
I've been pondering the recent trend in pop-up ads where, if you try to dismiss one by clicking the tiny, hard-to-find checkbox and you miss, it moves. This forces you to actually pay attention, to a degree, to the ad, rather than habitually dismissing it.

These are usually found in ads, or notices like the New York Times puts up notifying you there are only so many free articles left.

I think of this as a gray pattern usually, as it is designed to keep the source of revenue going to fund the site you are currently reading. It's a surprisingly effective innovation.

Some of us always hit back on those pages. No way am I going to read your ad. Try too hard like that, and I will leave the page rather than ever look at it to find the close button.
uBlock Origin. I don't allow any ads. I'll donate to a Patreon or buy your T-shirts or a book if I like your content, but fuck ads.
Pricing pages are another type of UI that could be labeled black hat. Personally, I think the typical pricing "tricks" like anchoring, bundling, and freemium are fine and part of running a business.
This definitely reminds me of the process of deactivating your Facebook account. You really have to read what's on the screen in order to choose the right options.
Where do we draw the line between "dark patterns" and "smart design?" I feel like the Hacker News community could be advocating for a persuasive pricing page design one day and decrying its dark design pattern the next. The obviously evil patterns are easy to avoid, but it's difficult to distinguish between appropriately persuasive and inappropriately manipulative in the grey-middle.
Is the use of a tech shift or form factor change as an opportunity to redefine the product so as to reduce customer freedom or introduce surveillance a dark pattern?

I've thought of this with the mobile revolution. You could never have introduced total device lock down and ubiquitous telemetry so easily in the PC era. There would have been an outcry. But change the form factor...

Quora does a ton of this in their email newsletters to try and reactivate you! Want to unsubscribe? Okay! We'll just make up a new "round up" newsletter and re-subscribe you. So desperate to stay alive I guess...
Isn't part of the problem that "conversion rates" and "engagement" now trump just about every other metric? Anyone know good ways to quantify annoyance rate and off-puttedness?
I am surprised there is nothing about comcast on there.
Maybe it's not an issue about the designers. Maybe the problem is that most marketing people are "black hat" by design.
Has anyone tried to ignore iOS updates recently?
Those popups got so annoying that I had to block Apple's update servers. No popups ever since..
So annoying I'm buying an Android next.
These are product updates and oftentimes contain security and stability fixes; nobody is trying to trick you here. Apple is proactively trying to keep their users up to date. I see no problems with this.
I'm not talking about minor releases here... Avoiding an upgrade to a major release (iOS 10) triggers an unavoidable antipattern: every 24 hours iOS asks if you want to upgrade now or be reminded later... but that's not the worst part... At times it will show what looks like the lock screen, but if you enter your passcode the OS will auto-upgrade the next time you are plugged into power and on WiFi. If you aren't paying attention, you think you are simply unlocking your phone instead of authorizing the upgrade. I have no problem with patch releases, but I find that major releases eventually bog down older hardware.
This is just PR speak to cover for the false dichotomy that you must accept new features (and progressively slower OS upgrades) to receive security updates - or be vulnerable. In reality, Apple could easily back-port security fixes. They are abusing security to force undesired changes to functionality.

Edit: Take a look at how any Stable/LTS Linux distro handles this. If a bunch of hackers can do it on such a diverse software stack, surely the company with the largest cash reserves in the world can figure out how to do it on a software/hardware stack they control completely.

and at the top of the list of Bait and Switch is "Microsoft: Windows 10 Upgrade".

That's what got me: after the hundredth time it had appeared, I clicked the X instead of . It's the tactics of criminals.

That Ryanair example is outrageous, and not at all surprising
I think labor unions would be the best option. Strike against the employers, DoS their website, and contact the press. No one will feel bad for the scammers.
I submitted quite a few of these.
My pet peeve: If you add a credit card to the Uber app there is no way to remove it without replacing it with another valid payment method. If you google solutions to this you get third party recommendations to plead with their customer service to have it removed. Seriously?

This customer-hostile approach really needs to be killed.

This might seem obvious, but why do you want to have an Uber account with no credit card attached? SaaS products usually require you to have a valid credit card attached at all times. I don't see the issue.
You can modify your billing address to some nonsense. Payments shouldn't go through with a wrong billing address (although, I never tried it).
Really, I would be equally happy to have them remove my Uber account altogether. Except it's equally cumbersome and non-obvious. There are no immediate actions to do it from within the App or the website:

Is this even legal? I'm in EU and I was able to remove my credit card. Maybe it's an American thing?
I'm in the EU and I was not able to remove before doing that trick with a virtual CC. What country are you in? (I'm in Sweden.) It's quite possible that officials/bank people in your country made noises about this and they white-listed this particular country.
From Spain but when I did remove it I was physically in the U.K if it makes any difference.

Although, if I think of it, I have also PayPal tied (but you can cancel that from their interface). Maybe Uber thinks I do have a payment method and they don't know I'm not allowing the charge from my PayPal anymore.

PayPal does this too last time I checked (admittedly about a year ago).

It is frustrating as fuck.

Use 1111 2222 3333 4444, the credit card test number. This is valid at the check digit level, but known to all banks as a test number to be rejected.
Thanks for the tip, that may be useful in the future!

I ended up using my bank's "virtual credit card" service to create a virtual CC with a balance of 1 SEK to get rid of my Uber account. Anyway, I think this is shameful of them.

I would be ashamed of this practice if I worked for them. There is no excuse. You can't blindly blame it on A/B evaluations and what ended up making the company the most money. It's simply unethical.

> I would be ashamed of this practice if I worked for them.

There's no shame because there are no consequences. "Oh, you were the guy at Company X that wrote that annoying Dark Pattern Y, huh? Can you walk me through the ethics of that?" - Said no interviewer ever.

I was just discussing this with a former co-worker. [Here in Europe.]

Also: I'm pretty certain that if you had a history of using dark patterns.. that would be seen as basically fraudulent behavior here. Investors would stay far away.

We both shared our distaste for a typical american way of accomplishing personal financial success - fake it til you make it, etc etc.

Maybe this is a part of what sets SV apart from Europe - and why SV keeps winning :). Fraud works.

cough Volkswagen cough
Sneaky germans.
I suspect there are many interfaces that would reject this number, as it's not a visa, master card or amex (which start with a 4, 5 and 3 respectively).

5105105105105100 and 4111111111111111 might be better alternatives (other test numbers that also pass the luhn check).
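The Luhn check mentioned above is simple to verify yourself. Here's a minimal sketch of the checksum (the function name is illustrative, not from any particular library):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    # Walk the digits right-to-left, doubling every second one;
    # if doubling yields a two-digit number, subtract 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The well-known test numbers mentioned above pass:
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("5105105105105100"))  # True
```

Note that passing the Luhn check only means the number is well-formed; whether a given interface accepts it still depends on the issuer-prefix checks mentioned in this thread.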

And when using email addresses to test your software always use [something] instead of [whatevs] because the people at love getting your random test data.

/sarcasm Don't do this.

My question is: how well do these dark patterns work? People who have or must have implemented them, do you have any data?

For example, the join-newsletter pop-ups you get on websites. I assume everyone gets pissed off and closes them, or do they?

I've had the opportunity to speak to quite a few people who use the modal popups for newsletters and the like. They do get used. A lot more than you might imagine - and often enough times to warrant pissing off some small portion of your users.

This may vary with the audience, and my anecdata is N ≈ 5 devs.

I worked for a large e-commerce store. We implemented pop-up email newsletter opt in and saw 4x growth in our mailing list. Pop-ups work, they suck from UX perspective but work nevertheless.
The problem is that they do work within the definition that the growth hackers are working with. They do not measure the people who are driven away and never return.

Think of it as similar to the mall kiosk people -- they don't care if you are offended; you probably would never have bought their product anyway.

Also similar to spammers who now send emails that are so stupid that you think no one would ever click them -- except the small number of people who are so naive that they do -- which is what they are trying to select for.

Which points to age old problem of any public network: spam.

This is the 45th time I've seen this site on HN... How do people not know about it until now?
An awful lot of the content on this website shows naivety or a lack of understanding of the sites it calls out, and in a few cases displays information which is simply untrue.
Think you could be more specific ?
Yep.

* The $10 arises because the option for including edge servers in Asia is selected.

* I'm pretty skeptical that this had any malicious intent. It is showing the cheapest option in the column. Often the customer will already know which class they want to fly in, so it helps them to be able to skim down a column looking for the cheapest option.

* I think this is seriously reaching. Every single pizza chain (both online and in-store) works this way. They do a lot to point you in the direction of their offers page too.

* Hilarious. If this website had any idea how much negative impact price comparison sites have had on car insurance in the UK, they'd be praising Direct Line for not lowering themselves to the tactics of all the other insurers (offering an incredibly unprofitable first-year rate, then massively increasing pricing when you renew).

I'm sure there are more - these are the most clear ones after skimming through about half of the categories.

I mostly agree with what you have pointed out.

The one exception is the British Airways site. That page is very confusing. At best it's awful UI. At worst it was created to trick the user into purchasing a more expensive ticket.

I totally agree that the UI is absolute dogshit, but I don't think I'm willing to definitively call this out as a dark pattern when it definitely could be shitty design.

That's my problem with this website - it would have more impact if it was more honest/genuine and only called out websites which are definitely 100% dark patterns. Or perhaps they could show questionable websites further down under a slightly different heading - to show that they do recognize it's not black and white.

Does it count if there's no user interface? Amazon has my email address, I've been a customer since the late-nineties. They keep inventing new email lists and signing me up for them. Each time I get a new "newsletter" it says something like, "You got this message because you're subscribed to the 'Tablet News' newsletter." I click the unsubscribe link to remove myself from it. Along with the unsubscribe link there's a link to my subscriptions. When I go there, it only shows the ones I want (the specific authors I'm following). I want to unsubscribe not only from this latest list you just signed me up for, but also from all future lists you may want to sign me up for. I really don't want to get any unsolicited marketing email from you. Really. I don't want it. Please let me out.

Also stop letting marketplace sellers email me begging for feedback after every marketplace item I accidentally order. I try my best to not order marketplace seller items anymore but when I accidentally do (or buy a gift for someone that is only offered this way) I always end up getting emails from these guys. Are you sharing my email address with them? Does unsubscribing or responding to them share my email address with them? I have no idea. There is never anything useful and it's impossible to unsubscribe from all past and future marketplace emails which is really annoying. Come on, amazon, I really want to love you and continue shopping there but it's getting to the point that I'd rather go to wal-mart! (ok not really)

For what it's worth, marketplace sellers don't have access to your email address, Amazon relays email messages through their servers.

I am a seller on Amazon. I didn't think I had access to email addresses, but you got me curious. I just went into an order and clicked "contact buyer." It gives a contact form that has the receiver's address as something like [email protected], with a note: "IMPORTANT NOTICE: When you submit this form, Amazon will replace your email address with one provided by Amazon in order to protect your identity, and forward the message on your behalf. Amazon will retain copies of all e-mails sent and received using this service, including the message you submit below, and may review these messages as necessary to resolve disputes. By using this service, you consent to this action."

Personally, I don't contact my buyers at all, ever, unless it's a reply to a question they asked me.

On the email front, I've been getting bizarre emails from Trulia about 1-2 times a month for the last six months. I don't open them but the subject is "1 new rental available in $(my town)." I own a house and I don't remember giving Trulia my email address ever even when I was apartment hunting many years ago. This only started six months ago. I wonder how I got on that list?

Going forward, you can append "+whatever" to the username portion of your email address, e.g. [email protected], and gmail and most other providers I've used will ignore that part. Use it to trace the sharing of that address. I used that when signing up for a particular mailing list and found that my address was shared with about a dozen other marketers.
The shady places will just trim everything after the + anyways
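Trimming the tag really is trivial for a sender. A sketch of what that stripping amounts to, assuming Gmail-style `+` tags (the function name and addresses are just illustrative):

```python
def strip_plus_tag(address: str) -> str:
    """Remove a '+tag' suffix from the local part of an email address."""
    local, _, domain = address.partition("@")
    # Keep only what comes before the first '+' in the local part.
    local = local.split("+", 1)[0]
    return f"{local}@{domain}"

print(strip_plus_tag("username+trulia@example.com"))  # username@example.com
```

A one-liner like this is why the technique only traces leaks from senders who don't bother to normalize their lists.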
Yeah, I’ve always figured as much. I use Yahoo’s disposable emails that don’t feature a “+” in them. There’s no way the sender can tell they’re sending to an alias.
This comes up every time, and it's just too much of a pain in the ass for basically no reward. Do you think spammers are too stupid to strip out the + part themselves? It's not worth the effort to even open the emails, so having the + doesn't personally help me. I'd have to worry about having a different email address for every service I use? No thanks.
> Do you think spammers are too stupid to strip out the + part themselves?

No, but they are only targeting gullible people anyway so they don't bother:

> Finally, this approach suggests an answer to the question ["Why Do Nigerian Scammers Say They are From Nigeria"]? Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.

This is also the reason for bad spelling/grammar/etc -- most of it is probably intentional.

(Which is really depressing and shows just what awful people these spammers/scammers are.)

People have learned that you can simply strip that part out of the address... it's in the standard. If you want to know where your data leaked out, you should use a different email every time (e.g. use your own domains).
Using Postmail's virtual aliases and a nice web interface to manage them, I create a random virtual alias ([email protected]) associated with a site name for every place I have to input an email address, it makes spam tracking pretty simple.
I would love to see a spam-shame site made by people who do this, outing the companies who sell or trade our email addresses.
Maybe + as a whitelist rather than a blacklist.
I'm still surprised that so few people know about it: it's been around for 15+ years and offers the exact feature you're talking about. You don't even have to create the alias; it will be created on first use.
> People have learned that you can simply strip that part out of the address... it's in the standard.

I'll take "Common RFC 821/2821/5321 myths" for $300, please.

RFC 5322 and RFC 5321 are very emphatic that the local part has no semantics whatsoever except those given to it by the MTA. There is no semantics for what "+" means in the standards.

Not just Amazon, I am constantly clicking through those unsubscribe links but just get auto-signed up for new lists. Fuck you Amazon and anyone else that does this. I'll buy your junk when and if I need to. Leave me alone and stop wasting my life.
Click filter, add rule to delete from domain.

But really, the problem got so bad I had to stop using gmail altogether.

Moved over to and now I do not get interrupted anymore in my life.

Thank you, email and IM and "notifications", but no thank you. If I ever receive a notification of any kind, that account either gets the nuclear delete option or is disabled forever.

Time to move to new email providers and new emails, and to stop pretending an email address is an ID, because it's not; it's a mailbox. When it gets full, create a new one, and only the people you care about, when you care, will receive its attention.

Report them at
Please don't. All you're doing is polluting the data at Spamcop and violating their terms anyways.

Bulk email can be split into two categories: Opt-in and Opt-out. Opt-in is email that an individual requested or agreed to receive. Many legitimate mailers use opt-in methods for marketing. Individuals are responsible for reading and understanding a company's privacy policies and acceptable use policies (if applicable) before submitting an email address. If a privacy or acceptable use policy clearly states that signing up for the service results in receiving marketing or commercial email, then the individual has "opted-in" to receive email and that email is not spam.

A company emailing you when you have given them your permission to email you, but about certain topics you decide you don't like is not spam.

Obviously you need to "unsubscribe all" on the first mail before taking such an action on subsequent mails.
This practice is clearly not opt-in! There's no "please spam me with all future newsletters we invent" checkbox that you ticked yourself.
Permission is granted company by company, not topic by topic. I get that "spam" has become throwaway shorthand for "email I'd rather not receive", but that is not the definition used by law and by antispam groups.
Never click a spammer's "unsubscribe" links -- that just tells them they have a valid account. If you use a big email provider like gmail, flag them as spam. Otherwise, make a rule in your mail client to automatically file anything from their domain to the trash.
I don't think spammers care about the difference between an email they have and an email they have that clicked a link. They're going to spam both anyway.
I'm pretty sure they do care. They're renting out a bot-net at some $/message, and only make money when someone actually clicks through to buy their "p3n1s pills," so knowing that a real clicking human is associated with an address matters quite a bit.
I've had good success with unsubscribing from legitimate mailing lists using the links at the bottom of an email. It's not the company's fault if I unthinkingly subscribed to their newsletters - they shouldn't have all their emails forever tagged as spam for my mistake.

Real spam however, rarely makes it into my inbox - it gets filtered out and deleted without ever being opened.

> It's not the company's fault if I unthinkingly subscribed to their newsletters

If they're not confirming your opt in they're spamming you, especially if your opt-in is the result of a default-checked check box.

> It's not the company's fault if I unthinkingly subscribed to their newsletters - they shouldn't have all their emails forever tagged as spam for my mistake.

It is. Default checked "subscribe" boxes during sign in, hidden settings, new lists which you are auto subscribed to; it's a never ending battle and the incentives are wrong. If there was no unsubscribe link but only the spam button, publishers would be much clearer about these things.

Ironically, the unsubscribe link has probably led to more spam, rather than less.

Half the time, clicking the unsubscribe button leads you to another form that is almost impossible to comprehend, which results in (maybe) unsubscribing you from one list, but subscribing you to fourteen other newsletters and update and special offer lists, for a net increase in spam.
If the email says "click here to unsubscribe", then it should do exactly what it says. Taking you to a page that allows you to subsequently unsubscribe is not sufficient.

And yes, you should only ever follow links for companies that you are confident are not spamming you out of the blue, because of the danger that you are just confirming your email address is active.

My other gripe is that I'm not sure how anyone is going to tell that I have clicked on a word in an email, given that it's displayed on an xterm. But if they say that by clicking on it I am unsubscribed, it's their problem to make sure it happens.

I've also experienced this several times. I just went through the wave of shopping emails that come from Black Friday in the US, and found that most of the time I only unsubscribed from the company's "Black Friday" list. It's a shameful way to get around CAN-SPAM.

Companies know there's a risk of unsubscribes with every email they send. If they have several lists, they ought to show all lists you're subscribed to, with an option to unsubscribe from them all. They might actually keep some legitimate subscribers that way.

> Also stop letting marketplace sellers email me begging for feedback after every marketplace item I accidentally order. I try my best to not order marketplace seller items anymore but when I accidentally do (or buy a gift for someone that is only offered this way) I always end up getting emails from these guys. Are you sharing my email address with them? Does unsubscribing or responding to them share my email address with them? I have no idea. There is never anything useful and it's impossible to unsubscribe from all past and future marketplace emails which is really annoying. Come on, amazon, I really want to love you and continue shopping there but it's getting to the point that I'd rather go to wal-mart! (ok not really)

I've begun adding 1-star reviews when I get requests begging for feedback. It seems like the only thing I can do to discourage the behavior.

I've noticed LinkedIn doing the same thing. How many times do I need to visit their site and unsubscribe from every list before they get the picture?
Good idea to avoid amazon.

Did you try unsubscribing from all their newsletters?

No. These seem to deal with things without consent?
In general, I'm disappointed how Amazon went from being a store to a "marketplace". Sometimes I want to buy from somebody who'll actually curate their list of products and has a reputation to stand behind, one they don't want to sully by selling garbage or price-gouging on items they don't normally stock.

But maybe that's just me.

I buy a lot of used, old, obscure things (mostly books). Very often, Amazon is the only place I can find it reliably; my other options would probably be asking my Japanese/German/etc friends to dig through second hand shops and mail it to me.

I love the Amazon marketplace. (I also hate the 3rd party seller feedback emails but I have strict email filters so who cares)

It's not that I think the Amazon Marketplace is a bad idea, it's that the way the Amazon Store and the Marketplace are conflated by Amazon that's the problem.

eBay and AliExpress do what Amazon Marketplace does better than Amazon Marketplace, but Amazon wins because they've leveraged their success as a store for it.

> I buy a lot of used, old, obscure things (mostly books).

I find AbeBooks to be very good for that. Ironically, they were acquired by Amazon, but they are still a separate system.

If you are interested in books from the German-speaking region you should also take a look at (now owned by Amazon) which is the largest marketplace for professional used book sellers.
The marketplace works out very poorly if one doesn't have Amazon in their country. The info about shipping abroad is very much buried into whatever, and Amazon IIRC mandates that purchases can't be combined: if I buy two $0.01 books, FROM THE SAME SELLER, they will be shipped separately at $4-6 shipping & handling each.
Sometimes called "Dark Patterns"
Perhaps one should really call them what they are: scams.
From [1]: "A Dark Pattern is a user interface that has been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills."

This very clearly IS a dark pattern.


> trick users

The important bit here. There's no tricking going on. It's all clearly stated when signing up for the Free Trial. You're just being obnoxious.

Yes, there is trickery. A "Free Trial" does not mean 10 days for $300, which is what it essentially is whenever someone forgets to cancel. This is the same technique slimy online businesses (i.e. credit reporting agencies) use all the time, along with making it hard to cancel. They even made it quarterly so that they can make three months off of you instead of one.

If you think I'm being obnoxious you should perhaps reflect on your own behavior. Others in this thread clearly agree that this is a dark pattern.

I think they are wrong just like I think you are. That's no argument.

You're defaming one of the most beautiful services that has ever come out of YC. A service that will help the lives of so many hardware hackers and professionals. The pricing model for InstaPart is the most noble I've ever seen, and you're complaining about them being predators and dark pattern practitioners?

I think that's still really stretching. "In fact, it's only the lowest price in that ticket class".

If I saw those boxes across the board, like in their example image, it doesn't take a second thought, it's "obvious" (to me anyway), that those are the "lowest, per class".

Yeah, that's why I described it as an example of a bad design choice that may be accidental rather than a dark pattern. I do still think it's misleading, though.
I'm happy to see mainstream technews bringing this topic to a wider audience. As I'm sure most HN regulars are aware, we've had many high ranking posts on this topic

For those wanting to go right to the source:

Things were pretty cool until 1994. Then Canter and Siegel unleashed one of the first commercial dark patterns:

It's been a race to the bottom ever since. Dave Eggers's fine and breezy novel "The Circle" illustrates the ethical issues quite nicely.

"A Dark Pattern is a user interface that has been carefully crafted to trick users into doing things, such as buying insurance with their purchase or signing up for recurring bills." [1]

Presenting an authentic-looking UI dialogue which appears to come from the operating system, and using vibration to further reinforce the impression that the dialogue came from the operating system, seems very much to be covered under any definition of dark pattern I can find. The aim is to make you click the ad, and they use dark patterns to do that. Simply because the ad content is "malvertising" doesn't mean it's not utilizing dark patterns.


I checked out that site before commenting. All of the examples they use are of sites that employ the so-called dark patterns as part of a larger, otherwise legitimate site. So while your ad fits the definition that they offer I believe that, given the examples listed, the label is applied more narrowly. This might just boil down to descriptivism vs prescriptivism.
None of that is a dark pattern.

A dark pattern is a UI design that motivates users to use the program/site in a way profitable to the site, but contrary to users' interest. To trick people, see. Redirecting to the whole page instead of the link is just hotlink protection (as debatable as that is for an image hoster). Animated overlays on mobile are just shitty advertising and UX. But there's no dark pattern here.

I've now seen people here on HN misuse the term multiple times. It's a pity; the original idea is something to be aware of, and diluting what dark pattern means hinders that awareness.

> Redirecting to the whole page instead of the link is just hotlink protection

Sounds like something that's profitable to the site but against the users' interest to me

But not a deception triggering an action on the user side.
The deception is really in the URL. If I see an address ending with .jpg, I expect it to load a raw JPEG, and I have some implicit expectations that follow from that which are broken if the site serves HTML instead.

Also, it's actually much simpler than the discussion makes it out to be. Just look at the big picture. The user wants a picture, free of irrelevant or harmful bullshit. The site promises that raw image by means of providing what looks like a direct URL. Then it does not deliver on the promise, serving the irrelevant and/or harmful bullshit along with the picture. It's pretty clear that one side is trying to exploit the other.

Here's a screenshot of their Share UI:

There is something labeled very clearly as a "Direct Link." When I paste that link into my browser, I go to the image file.

If people are posting the Direct Link and it is indeed redirecting to the full site (per the claims), then that is most certainly a very dark pattern if not an outright example of blatantly lying to their users.

So in this case it is very much about the UI and what is presented to the user at the time they are grabbing the URL.

That's actually a good point and argumentation.

I tried to explain above of why that does not feel right to mix it in as dark pattern, but it gets muddy and based on the votes people don't seem to understand it, or just don't find it convincing. Short, adjusted for here: I'm not convinced people would not share and click those links anyway even if it were correctly labeled. But the widget looking like this is a good point against that argumentation of mine.

> if not an outright example of blatantly lying to their users.

That, of course. Hotlink protection by an image hoster that even provides an official direct link is in any case a deception, maybe a lie. Please don't mistake my argument against the use of the term dark pattern for a defense of the behavior.

> A dark pattern is a UI design that motivates users to use the program/site in a way profitable to the site, but contrary to users' interest.

Doesn't this definition cover all advertising?

No. Advertising can also just provide information. It is also not a UI.
It's a UI element, and a UI with just one element is a UI already, so a UI element is itself a UI. Let's argue semantics on a site for hackers.
Advertising is a verb (or a genre). It is not an ad. An ad itself can use dark patterns (good example: the skype ads looking like UI elements) – but that does not make advertising itself one.
Ad is short for advertisement, but I pull the "I'm not a native speaker" card.
using reddit on mobile when viewing imgur content was horribly painful with their app spam covering the top
That is quite possible :) I'm not disputing that.
Having links that used to be hotlinks no longer be hotlinks is a dark pattern; also URLs that look like hotlinks (ending in .jpg or similar) but aren't. I'd argue that serving ads sometimes but not always is a dark pattern (particularly if they use cookies or similar to never serve ads to the original uploader).
Again, no. A dark pattern is a UI that makes the user do something he does not want. The best example is LinkedIn trying to get people to spam their contacts with invites. The pre-checked "subscribe to the newsletter" checkbox is a dark pattern. So is the checkbox you need to check to not subscribe to the newsletter.

A hotlink is not a user interface; changing its behaviour is not a dark pattern. Serving ads is never one (though ads themselves can use dark patterns), regardless of whether the uploader sees them or not. Those are different kinds of tricks that have (edit: almost) nothing to do with what the term dark pattern describes.

Edit: I'd argue that links that used to be hotlinks no longer being hotlinks can maybe be part of a dark pattern. If a UI tried to get people to share a site, and the people do that only because they think the links are hotlinks, then that UI could be a dark pattern and the non-hotlinks a part of that. But the dark pattern is then the UI presenting the hotlink, the "share this image directly" widget, not the hotlink itself.

> A dark pattern is a UI that makes the user do something he does not want.

You mean like clicking a *.jpg link and getting something other than a raw JPEG file?

I just disagree with your definition of dark pattern then
It is not mine; it is the definition. Have a look at, the video and the library, to see how it is defined.
I think that was his point, different people can define the same word in multiple ways. Just because you/your favourite source says one thing, doesn't mean everyone else agrees, for better or worse.
Actually my definition of "definition" is that it is strict.
Master's-degreed, card-carrying, secret-handshake-knowing designer here. I remember when the notion of dark patterns was first discussed and differentiated from anti-patterns, and I think this current discussion is quite ironic.

It's ironic because it is hinging on a narrowly constrained definition of "user interface" design - one that has been appropriated by web/app designers in recent years. UI goes well beyond what is IN the browser window. Discussion of user interfaces occurred before they were graphical user interfaces, well before they were web browsers. So for a discussion that hinged on the idea of the misappropriation of terms, this scores high for irony, IMHO.

So, now that we're clear on the idea that UIs can include many things, it's also clear that the URL and the items returned based on the URL requested absolutely fall under the notion of UI. And this is most definitely an example of UI manipulation to create unexpected and negative results for the user, to the benefit of the website.

Additionally I should point out, the whole notion of interaction design once meant something much broader than digital interfaces - as an easy example, look at Don Norman's early work and you'll see that interaction and interfaces go well beyond windowed interfaces or even digital interfaces.

Guess what my master's degree is in ;) The Design of Everyday Things is one of my favorite books, and definitely the HCI book I enjoyed reading the most.

I'm aware that you can think of many things as being an interface. I even agree that heuristically switching the result page is deceptive. But there is a difference, though it gets hard to pinpoint. The manipulation is on another level. It is not the same type of thinking as pre-selecting a checkbox in an installer [0]. I don't agree that it is a UI manipulation in the normal sense. I don't see which psychological effects are used, where the manipulation is. There is a deception if a .jpg link does not go to the image, but how does that manipulate? Is the .jpg ending something prone to be clicked on? I don't think so.

Still, while I still value the difference between a dark pattern and any random deceptive behavior, at least I understand a bit better now why people persist on mixing up the term.


The Psychology of Everyday Things (later retitled "The Design of Everyday Things") is indeed a great book. I call out the original title, first, to be a hipster but more to point out that his very aim was to talk about things not Human Computer Interaction. It is not a book primarily about Human Computer Interaction and your regarding it as such or at least lumping it in with HCI shows a serious misunderstanding of the larger point.

It isn't that you can think of many things as interfaces it is that many things are interfaces and interfaces were around long before they became graphical or even digital/computer-based. So, URLs themselves are definitely interfaces – they're UI's and machine interfaces as well, given their multiple roles.

For the concept of Dark Patterns, the manipulation you're talking about is, at its core, abusing convention, expectation and perception to steer people into an experience that they wouldn't choose if it were more obvious. So, essentially, dark patterns are deceptive.

What I gather you're asserting is that they must also be manipulative and get people to do something themselves? By that bar, I think imgur's returning of pages instead of images, when the convention is to return an image for a URL ending in .jpg etc., may be questionable. No URL request will ever rise to the level of nuance that a visual interface will. However, I still think this case fits. It is abusing a set of conventions and intentionally guiding a user into something they weren't expecting.

Additionally, think about the multiple use cases here. It's easy to focus on the casual browser clicking a link from Reddit but it's also about the user creating the reddit post. They are following convention, using what they think is an image link, choosing to post it, only to then unwittingly be involved in serving up that annoying as hell moving cat paw ad on top of the image they're trying to share with others. That sounds a little like a dark pattern at work to me...

> It is not a book primarily about Human Computer Interaction and your regarding it as such or at least lumping it in with HCI shows a serious misunderstanding of the larger point.

"Primarily" I did not say, did I? The book was required reading at the first university where I heard an HCI lecture. It was recommended in my master's (in HCI, both the degree and the lecture), and contents from it were taught. I'm not sure about everyone in my current team, but I know at least some have read it, and I saw general references to its content. And those guys are pretty much the core of European academic HCI.

Everyday Things has that role, as far as I can see, exactly because it is not talking about computers. It manages to show concepts and principles in a way that makes clear how universal they are. Besides, computer interaction does not happen only through display interfaces.

> I call out the original title, first, to be a hipster but more to point out that his very aim was to talk about things not Human Computer Interaction

Actually, the foreword of the 2002 edition I have open right now explains the change of title. It describes that it is because psychology in the title made it go to the wrong bookshelf in stores, in that people did not capture that it was talking about objects and design instead of psychology itself. Where do you have your explanation from?

> What I gather you're asserting is that they must also be manipulative and get people to do something, themselves?

Exactly. Or to not do something they'd usually do.

> That sounds a little like a dark pattern at work to me...

A little, yes. I stand by my view that the manipulation is not sufficiently present, the deception is not enough, and the look of the link is not enough. But like I said, I now understand why others are mixing it up – and the point about the different use cases might apply. I was serious in my explanation in another comment that there might be a group of people for whom it really works as a manipulation, even though I can't see it working in a general way without prior conditioning, and then it fits the dark pattern definition reasonably well.

> Edit: I'd argue that links that used to be hotlinks no longer being hotlinks can maybe be part of a dark pattern. If a UI tried to get people to share a site, and the people do that only because they think the links are hotlinks, then that UI could be a dark pattern and the non-hotlinks a part of that. But the dark pattern is then the UI presenting the hotlink, the "share this image directly" widget, not the hotlink itself.

I'd say the URL itself can be UI - users know what a URL to a direct image looks like and how that differs from what a URL to a page tends to look like.

That does not matter. A URL alone does not entice users to do anything. You need more to get a proper dark pattern, to get the "convincing the user to do things" part of the definition of what makes something a dark pattern in the first place. Please look as well at the website to see what a dark pattern is about.
So you're saying if it's not on that website it's not valid?

Come on.. If I see a URL that ends in JPG, my understanding is that I am about to load an image. If the site then shows me a page full of ads with my image somewhere on there, that's exactly what you described.

The site is tricking me into doing something that is to their benefit and to my detriment. Showing me a URL that looks like a link to an image is absolutely encouraging me to do something. It's telling me "hey click here you'll see the image right away"..

I did not want to see a page full of ads, I wanted to see a single image, but now the site has monetized me without my consent.

It feels like you're arguing for argument's sake here.

Agreed, this fits the definition when in this context and with this intent
One last try. I'll stop then.

If you are really arguing that you are inherently more likely to click on the direct link, and it is this click impulse that is used to manipulate you, and that going to a page containing that image plus ads instead is the big negative outcome, then I understand why it is a dark pattern for you.

I did not see that direct link as having a higher affordance (that might stretch the term a bit too much) to be clicked on. I still don't – but if you think that there are people who are conditioned to click on those direct links, but would not click on the normal link, then I'll have to give you the point that the heuristic changing of the result page might be a dark pattern for those people.

I'm not aware of that effect, but I can't be sure that for example on reddit for people without adblocker or on mobile for some time the direct link wasn't a positive click signal that conditioned them.

I think that explains why I'm arguing. Dark patterns are a manipulation, and simply showing another page is not something I can count as a manipulation – it is not the same thing as in a window making people click on the wrong button (even though I understand that there is a similarity if you follow a specific line of thinking).

"Showing another page" is not a manipulation? So the redirects of yore, where the link text showed a URL that did not match the link's actual URL, were not a manipulation? That is effectively what imgur does.
Direct link is a significant "positive signal" for the following reasons:

- you implicitly expect it won't load code to your browser

- you implicitly expect it will serve the one and only one resource that you need

- you implicitly expect that after receiving the resource no further data will be exchanged between the client and the server

- you implicitly expect it will work well with the standard UI of your viewing platform - for instance, it will be pinch-zoomable on mobile, or zoomable with mouse in desktop browsers

- you expect to work well with applicable context; for instance, a direct image link should work in "href" attribute of "a" tag (resulting in an image being embedded on a website), it should work with curl or wget (resulting in a single image file being created on your hard drive), or just browser's "save" feature (again, resulting in a single image file being created)

I'd call breaking these things a dark pattern. A particularly nasty one at that, since it's poisoning the well. Breaking users' trust in what URLs do is one of the many subtle ways of fucking up the Internet for everyone for personal profit.
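Those implicit expectations can even be checked mechanically. As a rough illustration (not something the commenters propose; the helper name and example URL are hypothetical), one can compare the MIME type implied by the URL's extension against the Content-Type the server actually sends:

```python
# Sketch: does a "direct" image URL actually behave like one?
# The helper works on a plain headers dict so it can be tested offline;
# fetching real response headers (e.g. via urllib.request) is left to the caller.
import mimetypes

def looks_like_direct_image(url: str, headers: dict) -> bool:
    """True if the served Content-Type matches what the URL's extension implies."""
    implied, _ = mimetypes.guess_type(url)       # e.g. ".jpg" -> "image/jpeg"
    if implied is None or not implied.startswith("image/"):
        return False                             # URL doesn't even claim to be an image
    served = headers.get("Content-Type", "").split(";")[0].strip().lower()
    return served == implied

# A site serving HTML for a .jpg URL breaks the implicit contract:
looks_like_direct_image("https://example.com/cat.jpg",
                        {"Content-Type": "text/html; charset=utf-8"})  # -> False
```

This is exactly the mismatch the thread complains about: the URL promises `image/jpeg`, the server delivers `text/html`.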

'The pop-up design had been described as a "nasty trick".'

This is also sometimes called a dark pattern

This seems like a common pattern

1. Realize you need to make a lot more money (to meet investor expectations, pay for the fat the company has accumulated, etc)

2. Panic

3. Apply dark patterns (

4. Something better comes along and eats your lunch

5. Repeat.

It's sad because imgur was originally created as the "something better".

I'm sure they made mistakes, but when you see massive growth like imgur did, someone's gotta pay the server bills somehow. Plenty of image uploading sites have come and gone because of server costs.
These are all good points but I have a strong feeling of déjà-vu with another thread in these comments...
Dear lamb, we had a great time but now I have to eat.
There have always been a few F2P games that aren't awful, and if there's starting to be more of them now, that's excellent news.

But the fact remains that 95%+ of F2P games build their entire business model around dark patterns[1], and as long as that remains true, new games will reasonably be expected to prove themselves better than that.

[1], if you're not familiar

I don't think that is a valid concern in this case. From what I understand, "dark patterns" is hardly a euphemism, and implies that the perpetrator of "dark patterns" is acting in bad faith. The only reference to it that I can find providing a definition ( says that dark patterns "are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind".
It's a shortening of "dark design patterns", which in turn is short for "outright malicious design patterns". I'm not sure who coined the term or when it first appeared, but I find it an apt umbrella under which to describe a whole constellation of tricks and gotchas designed for unsuspecting and usually non-tech-savvy users.

So it means "outright malicious design patterns", but instead of saying that and conveying the meaning perfectly using relatively established and self-describing terminology, one says two fewer words and makes me look it up on a website. It doesn't seem like a win to me.
Except "dark patterns" are a thing and what the Globe does is exactly that:
Dark patterns

"I clicked 'update security' and it gave me Windows 10!"

Microsoft did bundle the IE banner upgrade advertiser with security updates recently, and that was uncool/unacceptable.

But I've never seen an update security button that actually INSTALLS Windows 10, and that is specifically what this complaint is about, the upgrade starting on its own.

As I said, if there is a bug that causes upgrades to occur without permission, then people need to provide reproduction steps. If there isn't a bug and Microsoft is doing it on purpose, then why is it only impacting so few people? And how can we exclude user error?

there's a distinction between user error and forced-on-purpose

that's what I was getting at.

For example, if there was some UI which says.

"Get the Newest most Secure version of Windows (Download Now)"

with some fine print which says you will be installing windows 10 as part of the security upgrade

would that be considered 'user error' or user-hostile-design?

The IE security upgrade with the Windows 10 ads went out in the same push that seems to have turned on more aggressive auto-upgrade behaviour, so lots of people -- including media -- are conflating the two.

I wonder how many people are going to disable future security updates thanks to the media coverage?

Ugh. I know it's just an example, but is one of those "borderline legal frauds", a step away from phone slamming. Opt-in renewal, online signup and "you have to call us to unsubscribe".

Now I think of as the literal devil -- it might be possible to get a good deal, but you really better know what you've agreed to.

>, I have no beef with pay walls. But before signing up, I read the fine print. [...] United went a step further, though. It asked me to select “yes” or “no” before buying a ticket and highlighted the “yes” option as “recommended.” Required choice plus a recommendation qualifies as a strong nudge.

For examples like those, I think the meme "dark pattern"[1] has more currency[2] and I wish the author chose to use it (or at least mention it) instead of expanding his "nudge" meme into "bad nudge".

An author will stick to their own terminology because of ego, or because of ignorance. I don't know which is applicable in R. Thaler's case so we can be charitable and assume it's ignorance.


[2] ~263,000 hits for "dark pattern" and ~930 for "bad nudges"

Thaler comes from the behavioral economics world, and I'd say his work on "nudging", and even the term "nudge", predates the dark patterns site. His book Nudge is from 2008[1], while the DNS info for dark patterns shows creation in 2010. His mentioning the UX examples was probably a brief crossover between worlds that occurred to him to highlight an instance, but I'd say it didn't occur to him to scour the software development UX world for existing terminology because, well, it's probably not familiar to him at all.

Popularity on a Google search of a software term is of course going to win the popularity contest, it's the Internet. Do the same search of economic journals and you'll probably find the bias toward "nudge" over "dark patterns" in an extreme way. Different ideas proliferate on different mediums.

I do strongly recommend checking out some of the Freakonomics podcasts[2] and blog posts that involve Thaler. Extremely interesting and useful across many disciplines!



Convincing people to buy things using psychological manipulation is what marketing is all about.
There is a difference between psychological manipulation and cultivating confusion or blatantly misleading people. Ethics still applies to marketing's use of psychological manipulation, which asks the customer to participate.
Arguing names is the boringest kind of disagreement, unless the name is misleading.
I'm not arguing names. I'm a live-&-let-live descriptivist when it comes to language usage by the masses. E.g. I have no problem with most people calling RC helicopters "drones", etc.

In this case, I'm pointing out that "dark patterns" already has a rich accumulation of psychological tricks on web transactions. Thaler can build on top of that shared knowledge. For example, people can go to and then click on "Browse Library" and learn many more variations of what Thaler is discussing. If a reader is only exposed to the phrase "bad nudge", he/she knows less instead of more.

Do people want to eventually converge on "bad nudges" instead of "dark patterns"? I don't care. But at the moment, "dark patterns" is the phrase that has attracted the bulk of discussion. (E.g. see previous cited google search links and HN itself[1])


"Dark pattern" is a phrase that's only been applied to web design, whereas Thaler's definition of "nudge" is much broader (he certainly applies it to offline design, and I think also to direct financial incentives).
Sounds like a good dark pattern, classic godaddy. Let them think they can do something until they've become highly invested in the process, then bait and switch for the real gotcha.

I didn't mean it to be a bait and switch, where they couldn't buy the product without Prime. I meant that seems like the step they should try to upsell.

The customer is already thinking "How fast can I get it?", so that's the time to say "You can get it tomorrow if you had a Prime account". Or, you wait 5 days.

Before they published this blog post, I was mildly annoyed at having to type "$0" to download Elementary, but otherwise I dug their goal of making a clean and simple-to-use desktop environment.

Then they published this post -- and mind you, this is not the original version of this post (the original was worse).

With statements like:

>We want users to understand that they’re pretty much cheating the system when they choose not to pay for software.


>we feel that an entire operating system that has taken years of development and refinement is worth some money.

...when all they've really done is:

* Build a stripped-down Gnome-family derivative Desktop Environment

* Build a custom-skinned wrapper around GTK that fits their custom DE (and expect app devs to fall in line with it)

* Write a few super-primitive apps in it (a Rhythmbox fork, a notepad clone, and a basic email client)

is just a -bit- of an overstatement, to say nothing of the difference between a desktop environment and "an entire operating system".

Then they say:

> Most of the open source world is similar; Inkscape and GIMP only get money for development if users decide to give it to them.

without the slightest ounce of self-awareness to realize that neither of those projects make conscious design decisions to trick users into thinking they have to buy them. If I see a $____ box, my first assumption is that typing "0" has the same effect as typing "-1" or "aaaa": a validation error. It's a very close sibling of the "Sneak Into Basket"[1] UI "dark pattern".

These are the moves that turned me from a fan/advocate of elementary to a critic: not that their software was poor quality (the parts they had actually built were, for the most part, roughly as nice a UI as most other Ubuntu derivatives had in my opinion), but that their philosophy and attitude made them the kind of people I want to actively avoid promoting.


Weird for the article not to mention

It was the first source I know of that started a compilation of these.

Yeah, this was my first thought too. I definitely remember it from an article by the same author about the new UK directive.
The article is by the curator of that website, as mentioned in the first two sentences of the author bio.

I'm not even sure if I'd call exit intent pop-ins a "dark pattern", let alone call it the most bullshit and annoying one, considering the examples listed on the site.

I think I'd argue the "sneak into shopping cart" one as being the worst.

Half our industry is built on fooling users. Exploiting their cognitive biases. Seriously, we understand where they fall, and if we really wanted to, we absolutely could look out for them -- at least, certainly better than we're doing right now.

Your response is like saying that when Facebook was privacy zuckering, the user clicked right through the settings that should have sounded an alarm.

What can Apple do? They could make a better browser. I like how Chrome and Firefox do things: you have to go out of your way to reach a page with bad SSL. Compare Safari's rather passive and enabling error message with Chrome's, where you have to REALLY look and think about how to access the site despite the warning; it's that good. They could also be more vigilant in alerting users of where and how this can happen.

If we were talking about any other young startup, your apology might fly -- not so with Apple, they're sitting on billions, they have the resources to think of a solution and implement it.

I'll buy the argument that the industry has a duty to protect users, and also that Safari could be designed to better warn about SSL.

> Your response is like saying when Facebook was privacy zuckering, the user clicks right through the settings that should have sounded an alarm.

This is a bit odd, though. On one hand we have a company directly attempting to trick users; on the other, we have a company whose product is being attacked by a hostile government. Drawing an equivalence between the two is a bit ridiculous, no?

Yes that was probably not a very good analogy. I was trying to highlight the fact that we're really good at getting users to do what we want (things, specifically, that hurt them and make us more money). So far, we (the tech industry) have put a lot of effort into tricking them to do what we want, now maybe it's time to trick them for their own benefit, rather than ours.
The comparison was between a company whose web UI tricked unsuspecting/naive users into revealing private info to third parties, and a company whose browser UI is making it all too easy for unsuspecting/naive users to inadvertently reveal private info to third parties.

I think the two are quite comparable. In both cases, the software developer should be responsible for guiding the user to make the right decision.

Interestingly, up to now the only time I'd seen an SSL certificate warning was a misconfigured server. This is the first time I've seen an attack throw up a cert error (usually attacks leverage other avenues that don't alarm users). Microsoft Research even confirms:

    "It’s hard to blame users for not being interested in SSL and certificates when (as far as we can determine) 100% of all certificate errors seen by users are false positives."
Dang, didn't realize that the formatting cuts off the quote, here it is in full:

"It’s hard to blame users for not being interested in SSL and certificates when (as far as we can determine) 100% of all certificate errors seen by users are false positives."

People know ads are trying to manipulate them, they don't know that Facebook is actively trying to make them feel happier or sadder. Just because two things are similar does not make them morally equivalent.

People have been calling out various bad practices in conversion:

Emotional manipulation happens every day in all kinds of fields other than advertising, I wouldn't refer to all of these as "dark patterns."
> Emotional manipulation happens every day in all kinds of fields other than advertising

So it's not bad because everyone else does it? is an interesting read.
Related: - provides a list of common dark patterns with examples
To deconstruct means something like "disassemble" or "reverse engineer" - to reduce something to its constituent parts in order to de-mystify it. Quite a lot of posts on HN do this - explaining how some particular effect is achieved via clever hacks.

"Inciting suspicion" is about encouraging people to ask questions about why an interface works the way it does. Is it a dark pattern[1]? Why those options, why that layout, what is the person who designed this trying to encourage me to do?


This is totally and entirely deliberate on Facebook's part. It's kind of a mix between 'Zuckering' and 'misdirection'.

They could conceivably do something so that people aren't misled into this thinking, but they're not going to do that.

> Remember, if it was actually anonymous, you wouldn't need Facebook's help to implement it.

That sums it up nicely.

And actually, the thing is I could totally see Facebook enabling truly 'anonymous logins' and whatnot, because it can afford to do it at this point. It is too entrenched as the social king, no other competitor comes close to it, and it's not going to fall down anytime soon because it'll be buoyed by network effects for quite some time... and so it makes sense to ease things up a little to improve their public image. But the way they got there, to the top, was by using dirty and despicable dark patterns, like 'Privacy Zuckering'. For this reason alone, I would stay the hell away from anything Facebook.

It truly is an interesting move. Last night my housemate was telling me about MS's moves in the office365 space, I was gobsmacked by what they're up to these days. I actually commented "time to buy some MS shares", pigs were flying. And now this..

It does make sense though. Developers only really want a way to verify the user exists and isn't some spammy bot, and verification like Facebook login is the easiest way to go. The problem has always been that far too many people avoid signing up with Facebook as it shares "god only knows what" with the site. This works around both issues and really is the only way they can become the single sign-on entity, which is what they've been aiming for.

As for "not bothering to implement" (aap), if you already need to handle Facebook Connect login, it's going to be minor to add anonymous login. Facebook handles all the identity stuff, leaving your app clean of yet another verification loop that annoys the hell out of users.

Sure, there is the "apps need that data to make money" situation, but as a matter of fact, the people signing on anonymously were either a) going to create a fake Facebook account or b) going to sign up manually and give you a mailinator address.

> MS's moves in the office365 space, I was gobsmacked by what they're up to these days.

Such as?

In Excel they're using some pattern recognition: when you start filling down a column, it just fills out the rest instead of waiting for you to use the fill-down feature. I thought that was just plain neat.

They're improving collaboration in all products, ie bringing it on par with gdocs.

Something geospatial, don't recall exactly what; maybe it was to do with SharePoint or colo.

The most interesting was that they're letting go of forcing MS languages for plugins and allowing you to write and import widgets/plugins as HTML/JS, so all those nasty VB scripts of old are going to get some much-needed love from all the web devs out there.

I could be wrong on a few of these; it's just what I recall hearing about, and I'm happy to be corrected if I misunderstood any of it. Mostly it just sounded like, as a company, they're identifying what their core products are and innovating on them. As in, strategically, they're getting their shit together.

Oh. Yeah. That's how SV works. (I didn't know about this particular story though... must have missed it).

It's basically just another variant of a dark pattern.

They're everywhere. It's basically what fast growth is all about. It's about skirting some rule that you know very well is illegal or at least unethical -- and later when you get caught doing it, saying "Ooops!!! I didn't know this was illegal, LOL!"

Go to to get more examples of dark patterns. Facebook in particular is expert at it, e.g. see - the clear lesson to take away for startups is, don't give a fuck, do whatever unethical shit you can do to attain a large userbase, get big, get funding, and then either a) deny any wrongdoing was ever done, or b) justify it with some excuse like "we were young :)" or "we didn't know!" Sometimes you have to pay a fine, but you can be assured that the fine will be smaller than the profit made/growth achieved by some particular action. It's a winning strategy.

p.s.: Here's a fun challenge: try to turn off targeted advertising on an Apple device. Can you do it? (fwiw, it's much easier to do on iOS7 vs iOS6).

Mar 15, 2014 · vonnik on Worse
Amazon's upselling and cross-selling methods are good examples of dark patterns: design working to harm its users.
> I don't actually remember doing it

This probably indicates a dark pattern at work - it was presented as a quick, default and normal action and/or of little consequence, when actually it's quite invasive.

You said that Linkedin would "scan your contacts to see if those people are on LinkedIn" and this is likely what it is presented as, but actually that information might be retained indefinitely and may be used for other purposes that are thought up later. But hey, it's just metadata, right?

The LinkedIn Android App will periodically ask to scan your Google+ contacts for people you might know on LinkedIn, so if you've ever used that you could have leaked contacts to them that way.
If they didn't use dark patterns, such as always defaulting to your bank account instead of your credit card (which inconveniences me but makes them more money), maybe more of their people would use PayPal.

This regards the web payment flow experience and not necessarily the app.

Feb 02, 2014 · aye on Honest Android Games
This is a really important site -- it's like for the Android gaming world. Many thanks to the authors.
Nice: first, I would really prefer it if my name were spelled right. That much respect would have been the least you could do.

But back on topic: why should I (as a fairly technically adept person) have to search deep inside the configuration to maybe find a feature that I expect to be active by default?

So sorry - Ghostery is far off my radar nowadays, as I felt tricked and a victim of a dark pattern [1]. And as I now have a strict "zero tolerance" policy regarding sites/services/tools that act this way, Ghostery was, is, and will be on my list of tools that I would never ever recommend to anybody.

Btw.: I do not mind the downvoting, as it shows me I must have done something right:

"Methinks thou dost protest too much." (English proverb)


I'm glad you guys actually recognize the perverse incentives at work here and...

  "Speaking of perverse incentives, we’re often asked about
   our own. It seems that from the perspective of those
   paying us, Beeminder is providing a ton of value and a 
   ton of motivation and the occasional cost of derailment 
   is a fair fee for Beeminder’s service..."

  "in other words, Beeminder is putting itself on the map 
   for exactly one reason: it makes people more awesome.

   But that can lead to the opposite complaint — that 
   Beeminder’s sting is so valuable as to be 
   self-defeating. In other words, it’s hard to be 
   motivated by the threat of having to pay Beeminder if 
   you feel that Beeminder has already earned that money!"
... Ok that only serves to scare me more.

This is a nasty psychological game Beeminder is playing. So is GymPact. When people feel they have failed or are at fault, a part of them wants to provide recompense for that failure. Beeminder and GymPact are not the first to fit this business model; cable companies do it with wildly obtuse rules, ugly restrictions, and massive overcharge fees, all with the line "Well, it was your fault, it's written in the rules right here!"

That's what I see to be the problem. Beeminder puts itself resolutely in place as the 'go to' for seeking punishment, striking where humans are at their weakest. Of course, rather than Hail Marys, the punishment is money.

The fact is, because Beeminder makes its money through my failure, it has a monetary incentive to bring about that failure by any means, real or perceived. exists for exactly this reason! On what grounds do I have to believe Beeminder would be immune to such an influence? Because Beeminder loves me and wants me to get better?

That is how I saw it, from the outside looking in. I liked the idea, I really did, but with beeminder standing to benefit from the pledge, rather than say, a charity of some kind, I could never trust them.

That's how Audible is doing it now, however it wasn't always this way. For reference, see
Audible's response (on that very page) is actually a textbook example of how to deal with a thing like this.
It is, and I'm glad they've changed it. Having said that, this happened to me in 2003 or 2004 - so this wasn't a simple UI goof that they fixed a few months later. Additionally, the process for unsubscribing at that time required a phone call. In any case, the experience left me permanently sour on their company.
Analogous: LA Fitness. You can become a member online, over the phone, in person.

To cease being a member, you have to send a certified letter to a certain address, and you have to make sure it arrives by a certain time (up to two weeks) before your next renewal date.

I read an interview where one worker said they literally had someone whose job was to throw away all non-certified mail, unread.

You're being disingenuous. I don't think it was at all obvious, and it took me at least 10 seconds to fully realize where these extra charges were described, and that's only because I knew it was there somewhere. It took another 5 seconds to find the link to the place where you can buy it as a "regular" member.

This is an automatic opt-in with a non-obvious opt-out for a recurring membership fee. It's scummy, and the default should be to purchase as a normal member, then try to genuinely upsell me in the sidebar. It's the equivalent of being automatically enrolled in Amazon Prime when I go to buy anything from Amazon, and having to click a tiny link to the right that says "don't enroll me in Amazon Prime, just let me buy this" if I don't want Amazon Prime.

The entire shopping cart is an example of a dark pattern, and I personally wouldn't use a site whose entire business model clearly hinges on tricking or ensnaring its customers.

JustFab is not a 'scam company' (in the sense that they may not be doing anything technically illegal), but they are using DARK DESIGN PATTERNS[1] to trick at least some people into doing things they don't want to do.

The checkout page[2], in particular, seems designed specifically to trick people into signing up for recurring monthly charges. Any person who adds merchandise to the cart and then clicks the big 'Continue Checkout' button -- without stopping to read all the surrounding text -- will unintentionally sign up for the $39.95/month "VIP" plan.

My mom, who is trusting by nature, would never stop to read all that surrounding text, because she has been conditioned by years of online ordering to add items to a cart and then find and click the big checkout button. She would be tricked into signing up for recurring charges.


[1] See

[2] -- this was posted by one of the company's investors elsewhere on this thread. It's a canonical example of a dark design pattern.

What they are doing in fact is illegal in most countries with strong consumer protection. Explicit and informed consent is not something judges and regulators take lightly.

And since they apparently operate in Germany, France and Spain, I strongly suspect they are breaking the law in one if not all of those countries.

A law has actually been passed in Germany to curtail these 'subscription services':

Correct, and indeed, this business model is nothing new or unusual in the EU in general, or in Germany in particular. Back in the early, pre-smartphone days of mobile, a lot of European companies made fortunes by selling ringtones through a misleading subscription model similar to JustFab's. Their success soon led to a lot of imitators in the US market.

Remember those "Text 53646 to this number to get your NSync ringtone!!!" ads that used to blanket the airwaves? Texting that number got you your ringtone; it also got you $20/month in recurring fees that you were unaware of, because you were a kid, and kids usually don't read their own bills.

There may be nothing technically "illegal" about this dark pattern in the US, but the pattern is pretty fucking dark. And it's usually self-defeating in the long run. Many of the fly-by-night ringtone peddlers of the early 2000s had to flee one country after the next, always on the hunt for new suckers in new territories, always getting chased out of town by an angry mob of pissed-off customers.

This model is not sustainable. Though it's not a pyramid scheme, it shares a similar need for a fresh supply of new victims at a rate fast enough to make up for churn. Churn starts to reach critical mass at some point, forcing the company to expand into a new country altogether.

Then again, sometimes the model just works. Look at GoDaddy. As far as I can tell, they make a shit-ton of money by making it as hard as humanly possible to break subscriptions and hidden upsells, and sadly, they're still around.

You know who the most successful company was in this "ringtone" business? Jamba. Who was Jamba? The Samwer brothers. Just saying.
Pre-internet, or at least widespread internet, there was a similar scam with music tapes, perhaps records even before that. The first month's selections would be free but for shipping, after that they would send you a bunch every month if they didn't hear from you — and a hefty bill.
Yes, it's called "negative-option billing," as practiced by the Book of the Month Club, Columbia Record Club, and others.
buying stamps "on approval" predates all of these
One major difference: JustFab doesn't send you anything; they give you "credits" which can be used for future purchases but most likely go unnoticed and unused.
So at worst, you'd pay for one item in the first cycle and then cancel.

Also, at least back in the bad old days when I was at various times a "member" of various book clubs, you got sent a bill; the company didn't charge you up front and enjoy the float (interest earned on bank balances) like this company does on their credits.

And those credits will evaporate if/when this company goes out of business.

"Credits" are in some cases more insidious than being sent physical goods, because they're less likely to be noticed. And you'd be surprised how slowly most people realize they've been subscribed to a service -- especially if the billing for that service is under a fairly innocuous-sounding or obscure name (as is often the case).

Paying "for one item in the first cycle and then cancel[ing]" is probably an exceptional case. I'd be willing to bet that it takes most users two, maybe even three or more cycles to notice and then break the subscription.

JustFab is not a scam company...but they are using DARK DESIGN PATTERNS

So...they are a scam company? Just because you can try to apply different terms to describe their business, doesn't mean they aren't operating mainly to scam people out of their money.

Let's not whitewash it though. That's a scam.
Nah, it is a scam company as this type of activity is considered fraudulent in any sane country. Please edit your post accordingly.
It seems to me that calling scam behavior a "dark design pattern" just serves to legitimize it.
You just perfectly described a scam.
The image you posted is way more informative than this one:

Which one is most recent?

Frankly I don't see the difference between a company that tricks people into expensive recurring charges and a scam.

come on, it's a scam, pure and simple
Wait, so actually OP's girlfriend would have had a bunch of credits they could spend on items, right?
No, seriously, if your business model is based on tricking people into paying for something they don’t want and didn’t realize they’re getting charged for (no meeting of the minds), your company is a scam company. It’s simple as that.
Exactly. A business model that tricks customers is not a business model; it is a scam, and fraudulent.
It really depends upon the percentages. In my experience, pretty much any website will trick someone, regardless of your intentions otherwise.

Despite having to: choose a price, click a "I agree to be charged $<dollar amount> every month" checkbox, and enter their credit card information, a subscription-based website I used to run still got emails from people complaining, saying that they didn't want us to charge them yet.
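The flow described above (choose a price, tick an "I agree to be charged" box, enter card details) can be sketched as a server-side guard that refuses to start the subscription unless consent was explicit and the price shown matches the price charged. The field names `agree_recurring` and `displayed_price_cents` here are hypothetical, made up for illustration:

```python
# Hypothetical sketch of an explicit-consent check for a subscription
# signup. Refuses to create the subscription unless the user ticked the
# recurring-charge box AND the price they saw equals the price charged.

class ConsentError(Exception):
    pass

def start_subscription(form, monthly_price_cents):
    if form.get("agree_recurring") != "on":
        raise ConsentError("user did not tick the recurring-charge box")
    if int(form.get("displayed_price_cents", -1)) != monthly_price_cents:
        raise ConsentError("price shown differs from price charged")
    return {"status": "subscribed", "monthly_price_cents": monthly_price_cents}
```

The point of the second check is that a dark-patterned page could pass the first one while still burying the real amount; tying the charge to the amount the user actually saw closes that gap.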

It depends more on intent than percentages. If you set out to mislead and only a few people fall for it, then it is still a scam, albeit not a very effective one.

On the other hand, if your intentions are honest, but a ton of people are confused, then it is not a scam. It is just poor execution. Of course, if you become aware of this but do nothing to address it, then your intent becomes questionable.

Agreed. This was a standard practice in the adult industry, and it got so out of hand, with people signing up for a $2 day trial but getting their credit cards banged for hundreds of dollars because they didn't scroll down past a whole window's worth of blank space to uncheck the pre-checked "Sign me up for full memberships to these 3525 other sites too for $39.95/mo" cross-sale, that Visa and MasterCard actually stepped in and took action.

It's straight up scam-tactics.

> It sets such a terrible precedent that a company like this can raise so much money.

But so many precedents already exist. It's the wicked modus operandi for fast growth that a hefty number of startups go by. You trick users early in the game (browse around for examples); when you have attained a large group of users, you distance yourself from your past activities. Common excuses range from "we were young then!" to "we didn't think we were actually misleading users because of technicality $xyz". And of course there's the plain refusal to accept that any misdeed was ever done.

A lot of the established companies right now grew and expanded precisely because of these dark models. I mean, hey, they just work. My favorite example: what's really great is that FB can make a great fuss that they're better about the interface now... and the thing is, it doesn't matter. It's a large company, it's established, and now it'll continue to sustain itself through network effects, misleading interface or not. The lesson learned is: just do it, say sorry later. It's like when banks knowingly involved in predatory practices do the calculus on how much the fine for some wrongdoing might be, and usually go for it because the reward attained will be many times bigger than the fine they'll have to pay. Except in this case there is no fine, maybe just some pesky comments rebuking you in some forum.

This even serves as a part of the moat around a successful company. AirBnB spamming craigslist comes to mind as an example of this.
> They (the company and the investors) know obviously why the conversion rates are so high - Because they are very subtle-y scamming people.

Indeed. This website could be used as an example on

UPDATE: I have emailed Dark Patterns, I hope they feature it on their site.

This website should definitely be part of the Dark Patterns UX Library. It's sad how people compose UX (and investors invest in) shady user experiences designed to trick people.

While it might seem pretty clear to you, it isn't clear to most users. There is very little indication that you are getting the boots for $39.95 as a VIP member. In addition, there is a SINGLE large action item: "Continue Checkout." There is a WALL of text to the right, with no real indication that you need to read it (it's all the same font size). There is also an action item, "No thanks," which doesn't even look actionable. You claim that it is in plain English, but it actually isn't: the plain English says "Continue Checkout," not "Continue Checkout as a new VIP member." There is an indication that you are activating your VIP membership, but this wording is placed as far from the "Continue Checkout" button as possible. There was a recent article on HN about dark patterns, and this page uses several of them.
The screenshot you provided is a prime example of a "dark pattern" where the process is designed to trick the user.

Actually, this scam deserves to be there... Has anyone mailed it to them yet?
I have.
The checkout page you links deserves a notable place within Dark Patterns. (

But let's break down exactly why that page is designed to trick a significant number of customers into purchasing something that they did not intend to purchase:

* All the VIP information appears in a box that most shopping websites use for advertising or shipping FAQs. Many average readers (note that I do not say all) would fail to even glance at that box.

* The 'checkout as regular member' button is in a completely different flow of the page, and of a different size. It would probably be ok for one of those two things to be true, but by making both true they know that some customers will simply think there is only one checkout option when they scan the page.

* By far the most scammy part of the page is the fact that the payment does not appear in the shopping cart or subtotal box! People reasonably expect that with online shopping, everything they will be paying for appears in the shopping cart.

Many of those things on their own would probably be ok. You could just claim that it is a page optimised for converting to the VIP program. However, the combination of all those things is designed not only to convert people legitimately interested in the VIP program, but also to convert a sizeable number of people who skim-read the checkout page.
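The last point above, that everything a checkout click will charge should appear in the cart, can be sketched as a simple invariant. The record shapes and prices below are illustrative, not JustFab's actual data:

```python
# Hypothetical sketch: the cart shown at checkout should list every
# charge that clicking "Continue Checkout" will actually trigger,
# including any membership being activated. Prices are in cents.

def cart_display_lines(items, activated_subscription=None):
    """items: list of (name, price_cents); activated_subscription:
    optional (name, monthly_price_cents) that checkout will start."""
    lines = list(items)
    if activated_subscription is not None:
        name, monthly_cents = activated_subscription
        lines.append((name + " (recurring, per month)", monthly_cents))
    total = sum(price for _, price in lines)
    return lines, total
```

Under this invariant, a $39.95 pair of boots plus an activated $39.95/month VIP membership would display a first-cycle total of $79.90, so the recurring charge cannot be hidden off in a sidebar.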

"With this purchase, you will be activating your VIP membership" is in the sidebar. It's set in body-copy size, and you have to go looking for it.

This is a combined "Forced Continuity" and "Sneak into Basket" set of dark patterns that doesn't even give you the in-line checkbox to uncheck:



I was with you up until the screenshot - that's a completely disingenuous way to present some pretty vital information. It's a textbook example of a dark pattern[1].


The more common scenario is tricking you into clicking the wrong link or even downloading something. Softpedia's download page[1] is a perfect example of this.


Time to submit that dark UX pattern: :)
Could be something to report to [email protected]
It's funny, as I was going through the site I found the following issue:


Next is listed first, followed by Previous.

On the "previous page":

there, they are listed as Previous followed by Next. So when I got to the Disguised Ads page, I clicked the last link on the page expecting it to take me to the next page; instead it took me back to the previous page!
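The conventional fix for this kind of inconsistency is to render the navigation in a fixed order on every page, so the last link always means the same thing. A minimal sketch (the URLs are hypothetical):

```python
# Minimal sketch of consistent pagination: always emit "Previous" before
# "Next", regardless of which links exist, so the reader's muscle memory
# about "the last link goes forward" holds on every page.

def nav_links(prev_url=None, next_url=None):
    links = []
    if prev_url:
        links.append(("Previous", prev_url))
    if next_url:
        links.append(("Next", next_url))
    return links
```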

An accept/decline button on installing additional software is just as bad as a pre-ticked box.

If you've ever done user studies with people installing software, you'll notice 90% click the Next button until it's done, without reading the pages.

The way that the accept button is positioned in these "optional" offers makes it look like you have to click it to proceed. This is exactly what a dark pattern is.

I downloaded FileZilla from SourceForge to see how this offer system is implemented. From quickly glancing at the window, it looks like accepting is the only valid way of continuing with the installer. Furthermore, the program installs Hotspot Shield, which will constantly show the user ads after it's installed. I doubt even 1% of the people installing Hotspot Shield through this offer want it on their PC.

It's not just as bad; I would call it worse than a pre-ticked box. As you say, accept or abort look like the only two options. There is no decline-and-continue option.
I doubt even 1% of the people installing Hotspot shield through this offer want it on their PC.

No, nobody wants it; that's why it's there. Your optimism presupposes that there has ever been such a thing as a person who wouldn't mind more commercials on TV. These people aren't out there; people only tolerate the advertising they do see, and software like this only exists to inflict itself on the user.

I have a technically literate friend who chose not to install Adblock in their web browser because they find it entertaining to see ads on the web while they browse. There's probably lots of people who enjoy these things.
Do they search out more ads, and install more sources of advertising, no matter what they are? There's a difference between liking interesting commercials as a distraction or hobby, and having an appetite for advertising in general.

In other words, Nike and Budweiser don't advertise on Sourceforge crapware toolbars, but have you suggested to him that he might want to install that kind of thing?

One thing that unnerves me about A/B testing is that you can't test for evilness. How much of this glossary[1] do you think is intentionally designed? I doubt that all of these changes were intentional, it's just that those changes showed objective results.

A/B testing can't test for morality, and you may very well be implying something with your B design that you didn't mean to imply, which nonetheless rates higher in your test.

In a business setting, it becomes awfully hard to argue morality against objective numbers. It's hard to do that anyway. Many businesses operate with a profit-first motive. So once a dark pattern is in, how are you going to get it out?

Don't get me wrong, A/B testing is a tool, and like all good tools it can be used for good or evil. It just worries me that A/B testing, despite good intentions, can lead to evil results. I don't see anything on this page about how to avoid evil results.
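To make the "objective numbers" point concrete, here is a minimal sketch (figures entirely made up for illustration) of what an A/B test actually reduces each variant to. Nothing in the metric says whether variant B converted better because it is clearer or because it is deceptive:

```python
# Each variant boils down to one conversion rate; the metric carries
# no moral signal about *why* one variant outperformed the other.
def conversion_rate(conversions, visitors):
    return conversions / visitors

rate_a = conversion_rate(120, 4000)   # honest design: 3.0%
rate_b = conversion_rate(180, 4000)   # possibly-dark design: 4.5%
lift = (rate_b - rate_a) / rate_a     # +50% relative lift, reasons unknown
```

The 50% lift is the only thing the business sees, which is exactly why it is hard to argue against.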


I can see how you don't want to be in a situation where you are holding in your hand evidence that Shady Marketing Tactic X is proven to convert better. But I think the solution is simple: Don't A/B test shady features. If something makes you uncomfortable or seems like a dark pattern, don't test it, because you would never want it on your site anyway.

I don't think this is a failing of A/B testing. That someone can get numbers that show "evil" tactics can make money is not materially different from someone coding "evil" HTML in Notepad and getting results. As you said, it is just a tool.

The problem is you might not be aware you're making a shady feature. As an example, suppose you're making an ad for your app, Fooer. You A/B test some banners on an ad distribution network. You find that a simpler ad with a "download now" hyperlink graphic proves effective.

No big deal right? People are going to be seeing these banners on places like news sites right? It's not tricking people. It's also pretty reasonable right? People are used to clicking on hyperlinks, it overcomes the resistance to click on images. A reasonable move for a banner ad to make.

Except the ad distribution network serves content sites like softpedia as well.

You saw the spike because you emulated the download button right under the mirror list, and people were confused. They downloaded your application thinking they were getting something else.

Or what about if I want a "Hey you should sign up for notifications for other job offerings in this field" message box? I'd love to have one of those in my application. I A/B test to see that this message box drastically increases signups.

Except it was because people thought it was a paywall.

The road to hell is paved with good intentions.

Consider this case about the Weebly blogging service:

They were A/B testing messaging for first week signups. This hyper aggressive and insane route was actually considered. Do you think they're black hat? I don't think so, but what if this had become standard practice?

For those wanting to read more about dark patterns in design..
This is generally called "Deceptive UX" or

“It’s difficult for me [or the other customers] to tell if they do this deliberately to try to sell… or if they’re just careless with the way they’re designing things,” he said. “You need to see your product from the user’s standpoint.”

Mar 11, 2013 · xentronium on LinkedIn is a Virus
Good ole' forced continuity[1]. BTW, what OP describes is also a dark pattern[2]. I guess they use as their handbook.



The slide cast doesn't show up in mobile safari. But, there is a link on the page to a list with detailed descriptions and examples:
When we do it, it's sometimes evil - see

But we're not poisoning people, even in the worst case. All we can do is screw up the internet.

Food companies aren't poisoning people either. People are poisoning themselves.

You can make the case that these "poisonous" snack foods are so appealing because they are cheap, because the government only gives subsidies to agriculture used in making junk food, but that's a stretch & still assumes there are no other options (like eating healthier or eating less).

Your implied intellectual model here is that people are entirely in control of their actions and are (or at least can be) perfectly rational about them.

That's far from the truth. These junk food brands spend enormous amounts of money on advertising, product placement, and other ways to influence consumers. Why? Because it works to manipulate them.

See, for example:

Or talk to anybody who works in mass-market advertising. Or hell, just visit Vegas. If people were rational actors, Las Vegas would still just be a place where the Hoover Dam workers lived.

I don't think it's fair to blame the user for that. This is a standard-looking login form that users will have seen hundreds or thousands of times before. You don't reinterpret the words on a login form every time you see a new one; you type in the stuff to log you in without really thinking about it.

Regardless of Spotify's intentions here, they're benefitting from users' trust in normal login processes to get Facebook account access. Lots of designs exploit users' automatic behaviors like that; see Dark Patterns [1].


> I can't find the link the anti-patterns video/site, I'm sure someone will know the one I am talking about.

This one?

Yeah, pretty much. One I saw was a webcast but discussing the same thing. It's pretty well publicized now.. at least in the HN community :) Thanks.
This is a bit of a UX Dark Pattern [1], isn't it? The percentage is not strictly lying, it just means something else than you'd expect.


Haha, fair point! I'm the droning Englishman giving the presentation. If you'd rather read than listen, check out the - it's a public wiki that I curate. Here's the definition of Dark Patterns from the site:

"Dark Patterns are User Interfaces that are designed to trick people. Normally when you think of 'bad design', you think of laziness or mistakes. These are known as design anti-patterns. Dark Patterns are different – they are not mistakes, they are carefully crafted with a solid understanding of human psychology, and they do not have the user’s interests in mind."

Hey Harry-

Just a quick note for your wiki, since I don't want to make Yet Another Account... on the Misdirection page:

You mention ExpertsExchange has now hidden their information, and give this page as an example:

But check this out: plug the URL into Google:

Click "cached":

And then scroooooooooll again like before. There's the answers!

They don't want to lose the Google juice, so they change the contents of the page so that Google sees it all, and regular people don't. You can also change your UA string to "Mozilla/5.0 (compatible; Googlebot/2.1; + and it works...
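The UA-spoofing trick described above can be sketched in a few lines. The User-Agent string is the well-known Googlebot identifier; the URL is a placeholder, and the actual fetch is left commented out since it needs network access:

```python
# Fetch a page the way Googlebot would, to compare what the crawler
# sees against what a normal browser sees (a quick cloaking check).
import urllib.request

GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def googlebot_request(url):
    """Build a request that identifies itself as Googlebot."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

# Usage (real network call, so not run here):
# html = urllib.request.urlopen(googlebot_request("https://example.com/some-question")).read()
```

If the HTML returned to this request contains the answers while a normal browser request doesn't, the site is serving crawler-specific content.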

Ah, thanks for the explanation.

(And I apologise for the "droning" remark, I actually only listened to the first half sentence of what you had to say -- I wasn't expecting to hear a voice when I clicked on the "next slide" button -- so I'm not qualified to judge your speaking voice...)

Daniel, I'm sure the Dark Patterns Wiki community would love to read the details of the $10 recurring charge pattern you mentioned. So far there have been a few very satisfying examples where companies have responded to being named-and-shamed, and have taken dark patterns out of their UIs in response...
Are you interested in documenting offline dark patterns as well, for example those used in direct-mail or print/television advertising?

And, when a pattern refines or combines existing patterns, how do you prefer them be entered? For example, 'Hidden Costs' and 'Price Comparison Prevention' are often combined, lowering an advertised price or interest rate by moving charges into other later or less-prominently-displayed fees.

Accompanying website:

Beat me to it! There is also some decent coverage of Dark / Anti Patterns in the book "Designing Social Interfaces"
I wish they would have just linked to this site. As a side note, I hate slides because I have no idea of a slide's context in the presentation. Usually during a presentation, a slide is only complementary to what the presenter is saying, so why present just the slides? Also, Google indexes slides so high in search. I've never found any good content from just slides.

It's extremely rare to see an online store that includes all taxes and fees in its line-item prices, such as hotel/air tax, sales tax, or shipping costs.

And when I don't see them in the checkout, I get suspicious. Shipping is never free; please let me know what I'm paying for it rather than lumping it into one price.

I call it Ticketmastering and I do my best to avoid any site that does this.
henrikschroder: That's the US. In a bunch of other countries it's illegal for businesses to display prices without all applicable taxes.
What about fees only certain customers have to pay?

An example of this in the US is that companies don't know if you're going to pay their sales tax until they know what state you're in. Would they have to collect all required information before they display any products, (potentially being perceived as or display multiple prices?

It wouldn't be forced information disclosure if it is not forced. This seems like a straightforward, solvable interaction design problem to me.
In Sweden, the only real difference is between companies and persons: the former don't pay VAT, but the latter do. The EU didn't complicate matters; you still pay VAT based on the country you buy from, not where you are located.

So if you make a print catalog and send it to companies, you can print prices without VAT. If you make the same and send to people, you have to include VAT. Same if you have a store, you have to include VAT in all prices.

Webshops usually have a simple global toggle somewhere on the site where you can choose to display prices with or without VAT.

In the US case, you could simply use GeoIP to guess the default and display all prices according to that state, and somewhere on the site have a "We think you are in <state>. Change?" and a dropdown.

It's really not rocket science, and the end result is very nice for consumers since the total you see is always the actual total, not a bullshit number that gets magically larger at the last step.
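The scheme above (guess the state once, let the user override it, always quote all-in prices) is simple to sketch. The tax rates and the price function here are illustrative placeholders, not real data or a real API:

```python
# Guess the shopper's state (e.g. via a GeoIP lookup), default all
# displayed prices to that state's tax-inclusive total, and offer a
# "We think you are in <state>. Change?" dropdown to override it.
TAX_RATES = {"CA": 0.0725, "TX": 0.0625, "OR": 0.0}  # illustrative rates

def display_price(base_price_cents, state, rates=TAX_RATES):
    """Return the all-in price for a given state, in cents."""
    rate = rates.get(state, 0.0)
    return round(base_price_cents * (1 + rate))

ca_total = display_price(1999, "CA")  # default from GeoIP guess
or_total = display_price(1999, "OR")  # after the user changes state
```

The point is that the total shown on the product page is already the final total, so nothing "magically" grows at checkout.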

I'm not generally in favor of new laws but this should be illegal:
Doesn't every MMORPG (World of Warcraft, etc.) violate this? You generally get your first month free, but still need to set up an account with recurring billing. If you don't cancel, you start getting charged.

I guess usually you have to buy the box though, so it's not quite the same. Unless they offer the game as a free download.

I don't know how WoW phrases it, but the promise "buy a subscription and get one month free" is not the same in the mind of the customer as the promise "free one-month trial". The first gives a more accurate description of what will happen to your credit card.
Yeah, eFax got me with this one a little while ago. Always set a reminder to cancel your membership!
This exact same thing happened to me while subscribing with 1and1 for a 1-year free domain name + 100 MB of disk space. At the end of the year, they started charging me, but I never noticed because the credit card I used when I subscribed had expired.

A few months later I got a letter from a bailiff (not sure of the translation) saying that I owed like $100 to 1and1. Then I got another letter, and another; I never replied and the story stopped there.

For what it's worth, pretty much all major credit card companies impose conditions on merchants who want to have delayed or recurring payments, which form the financial basis for this sort of dubious behaviour.

As a result, if someone is trying to pull a fast one like this, you probably don't need heavyweight legal action to defend yourself. The payment authority on your card will have expired when your card did and chances are that the merchant is required by their card processing service to get a new authority for any further payments. If they didn't, and they didn't contact you immediately but let the "debt" run up for a few months, that's basically their problem, and neither the credit card services nor the courts are likely to give them much sympathy. Whoever they sent after you is probably well aware of this, and gave up as soon as they realised the situation, particularly if they're only receiving a percentage of any money recovered as their fee.

I am not a lawyer, this is not legal advice, if you trust any legal comments you read on an Internet forum without independent verification then you're a fool, etc. :-)

Thanks for the reply, but this story is long over. It happened like 3 years ago, so I think they've pretty much given up by now.
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.
~ [email protected]