Hacker News Comments on
Joe Rogan Experience #1309 - Naval Ravikant

PowerfulJRE · YouTube · 26 HN points · 10 HN comments
HN Theater has aggregated all Hacker News stories and comments that mention PowerfulJRE's video "Joe Rogan Experience #1309 - Naval Ravikant".
YouTube Summary
Naval Ravikant is an entrepreneur and angel investor, a co-author of Venture Hacks, and a co-maintainer of AngelList.
Hacker News Stories and Comments

All the comments and stories posted to Hacker News that reference this video.
Naval Ravikant discusses this with Joe Rogan (episode #1309) at min 25:20-28:20 youtu.be/3qHkcs3kG44?t=1520

What do you think?

Naval Ravikant on Joe Rogan. From 2019, but without doubt the best podcast I listened to this year.

https://www.youtube.com/watch?v=3qHkcs3kG44

Aug 29, 2020 · 19 points, 10 comments · submitted by simonebrunozzi
2OEH8eoCRo0
This one is gold. I don't even watch JRE but I've watched this one a few times.
systemvoltage
I have mixed opinions about Naval. On one end of the spectrum, he has a great intellect and says very sensible things; on the other, he is one of the most arrogant people I’ve ever heard. I just wish he were humble and down to earth. Many more people would be able to relate to him.
padiyar83
That's an interesting perspective. I have never found him arrogant. Maybe I have not watched the video clips or read the tweets that you were exposed to. He does have a counterintuitive take on things, but I don't think that comes off as arrogance (to me). I have always appreciated the clarity of thinking he has on issues. The way he answers five "why"s on questions related to wealth and happiness amazes me. I have learnt so much from Naval's videos and tweets over the years.
systemvoltage
You're right - he has a great trove of knowledge, small tidbits of wisdom, some interesting and very abstract perspectives on life, etc. He then mixes it together with how to get rich and other topics, which are entirely different.

For example, he thinks that we should always be working for ourselves and building leverage. But he ignores that most people actually don't want to run their own company and hire people. They're pretty content with their jobs, and they've got other priorities besides making money. He probably looks down on people like Chris McCandless (Into the Wild), who had a completely immaterialistic view of the world.

Great philosophers push their ideas without pulling down others'. Also, humility comes from the ground-up state of mind that our thoughts, no matter how convinced we are of them, are not universally applicable, and that we should respect others who may not share the road to enlightenment.

Furthermore, he looks down on things like:

- Audiobooks and speed reading. They're useless, he claims, because only the intellectuals actually read them, and only then can they comprehend what's being said.

DeonPenny
I think he says what he says about building your own business because of how society has shifted toward complaining about its plight. People complain about education, college, and the wealth gap in a way that almost assumes that someone is stealing from them.

If someone wants to work and these are the things they bring up, sure. But if they work a regular job yet don't understand that they have that option, and that a job will never and should never give you a life free of burden, then he should actually explain to them how they should go about getting those things in a healthy society.

sqs
What makes you feel he is arrogant? He does not seem arrogant to me. He’s very direct and thinks from first principles (not from what authority figures say, which can come off to some people as not giving sufficient respect to those authority figures). He is very open to admitting when he’s wrong or uncertain, and I’m pretty sure he feels like he has accomplished only a tiny fraction of what he’d like to.
alphagrep12345
Why is being humble important, and why is it a valued trait?
jonsno56
It’s a shame. He really did seem like a great guy, and he probably still has a lot of meaningful stuff to say. I think with the pandemic and the surge in his popularity combined, he didn’t know how to handle it and started acting immaturely on Twitter. (If you look at his tweets pre and post pandemic, they are very different. Pre-pandemic, he seemed to stay truer to his principles of not wasting time on “status games” like social media fights.) I guess the pandemic has really made (what seems like) even the best of us get a little more cuckoo.
loquor
Could you please link to a few of these posts? Partly because it would be wrong to speak ill of someone without proof and also because I'm curious.
jonsno56
https://mobile.twitter.com/naval/status/1262182228148146178

Moreover, scroll through Naval’s Twitter replies on May 17 and you’ll see that he trolled, en masse, a bunch of people who wanted to debate him regarding Canada.

He’s human, we’re all human, but you can imagine someone looking at his trolling spree and feeling a little sad, as I did. I used to admire him a lot, especially the way he tried to teach the importance of keeping your cool.

I would say Naval Ravikant on the Joe Rogan Experience [0]. A bit preachy (similar to Osho, whom he quotes), but there are some things that are actionable, and it is an interesting point of view.

I'm catching up on The Knowledge Project (which is quite a listen), so these are not from 2019; I just happened to listen to them last year. The ones that had the most impact are:

- Is Sugar Slowly Killing Us? My Conversation with Gary Taubes [1]. I got interested in this subject following the NYT exposé on how the sugar lobby shifted blame onto fat [2]. This builds on that for me.

- Survival of the Kindest: Dacher Keltner Reveals the New Rules of Power [3]. This may be selection bias, as I had a feeling that "survival of the fittest" is perhaps productive in the short run but would end with a species of one. IMHO, the way to succeed is to collaborate and share knowledge, else we may never have survived the hunter-gatherer phase. So this episode resonated with me.

- A16Z's Incenting Innovation Inside Loonshots to Moonshots [4]. Having been through a significant cultural shift at my organization, I found the analogy of water freezing at 32F and hysteresis, used to explain how company culture can change dynamically or how the same organization can have pockets at two extremes of a spectrum, quite spot on.

[0] https://www.youtube.com/watch?v=3qHkcs3kG44

[1] https://fs.blog/2017/11/gary-taubes-sugar/

[2] https://www.nytimes.com/2016/09/13/well/eat/how-the-sugar-in...

[3] https://fs.blog/2018/03/dacher-keltner-power/

[4] https://a16z.com/2019/03/24/loonshots-moonshots-incentives-o...

Someone a few days ago linked to this snippet of an interview with Naval Ravikant on Joe Rogan's podcast. Their failure to differentiate themselves from publishers has put them on a slippery slope.

https://www.youtube.com/watch?t=3661&v=3qHkcs3kG44

dredmorbius
What this snippet misses is the fact that FB, YT, G+, and Twitter determine what is presented and offered to users. Algorithm or no, their fingers, hands, feet and bodies are all over the scale.
Oct 18, 2019 · throwaway_bad on Facebook and Speech
The comment thread is worth linking to: https://news.ycombinator.com/item?id=21275269

In particular the Naval Ravikant interview: https://www.youtube.com/watch?v=3qHkcs3kG44&t=3661

I personally think this is awesome. I don't want some git hosting startup to be the arbiter of morality for society. The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product.

If these users are breaking laws, then put them out of business via the courts and seize the assets (the repos in this case) via legal means. Otherwise, why would I want GitLab to have anything to do with this process?

The tech unicorns screwed themselves over BIG TIME, the second they stopped claiming they were just infrastructure and platforms and got into content moderation. They will now forever be a pawn of whoever has some power and has some agenda. It's an obviously unwinnable game for everyone involved besides maybe some politicians.

I don't want this to become a Joe Rogan debate, but Naval Ravikant got this exactly right in his Rogan interview: https://www.youtube.com/watch?v=3qHkcs3kG44&t=3661

anarchodev
> I don't want some git hosting startup to be the arbiter of morality for society. The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product

Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society." Keep in mind that anyone is free to use other services (or roll their own) and gitlab can't do anything about that. Neither would it be overreaching for the workers building that product to request a say in how it's used.

> If these users are breaking laws, then put them out of business via the courts and sieze the assets (the repos in this case) via legal means.

Refusing someone a service you provide is a completely legal action. This has never been illegal, AFAIK. In many cases I can think of, the users wouldn't actually be breaking any laws, which isn't the same as saying that their actions aren't immoral.

> The tech unicorns screwed themselves over BIG TIME, the second they stopped claiming they were just infrastructure and platforms and got into content moderation. They will now forever be a pawn of whoever has some power and has some agenda.

In this last sentence, who are you claiming has power? It seems to me if I had power I wouldn't bother trying to persuade my boss not to do business with certain agencies, I'd just make it illegal and force them to change. Does that seem like a course of action available to gitlab employees?

madrox
> Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society."

This sounds really good until someone refuses to bake a wedding cake for a couple because they're gay. The supreme court ruled in favor of the baker. You're advocating for a world where that's ok. Is that the world you want to live in?

brandmeyer
> This sounds really good until someone refuses to bake a wedding cake for a couple because they're gay. The supreme court ruled in favor of the baker. You're advocating for a world where that's ok.

That's not what the Supreme Court ruled. They ruled that the CO commissioners who decided against the baker in the first place were being blatantly flippant about the baker's religious beliefs. They should have at least considered those beliefs. They explicitly did not rule that the baker's religious beliefs were enough on their own to prevent the sale.

https://www.scotusblog.com/2018/06/opinion-analysis-court-ru...

eplanit
Suppose the custom cake bakery owner is black, and a customer asks for a cake with a confederate flag design? There's nothing illegal about the confederate flag, so the baker should be compelled to bake it, right? There isn't even a basis for refusal based on religious grounds.

I would want the black bakery owner to be able to not only refuse to bake the cake, but also to tell the customer to take a hike and never return.

goatinaboat
> Suppose the custom cake bakery owner is black, and a customer asks for a cake with a confederate flag design?

This is a fun game! Suppose the baker is Muslim and you ask for a cake with a cartoon of The Prophet? Wouldn’t that make you the bigot?

My point is you can contrive any example here to get the conclusion you want.

joshuamorton
> so the baker should be compelled to bake it, right? There isn't even a basis for refusal based on religious grounds.

No. Because "supports the confederacy" isn't a protected class.

As far as US law goes, you can generally refuse a service to anyone for any reason, unless that reason is related to a protected class to which the person belongs, unless you have some other exception (almost always religious) for not doing the thing.

belorn
As you say, protected class is a very US-specific concept, but even there it actually depends on the state, as states can extend the list and in some places have. Protected classes under federal law are one list, and protected classes under state law are another.

Protected class also does not mean that everyone else is a free target for discrimination, and for international companies there is the European Convention on Human Rights. Refusing service based on politics requires the company to do a quite complicated dance around a long list of laws, and I doubt any company lawyer would be very happy to give it a green light.

bcrosby95
Same sex marriage wasn't even recognized in Colorado at the time. Your ire would be better directed elsewhere, such as the state where the action they wanted to celebrate wasn't legal. Especially a state where you can penalize someone for not baking a cake for an event that isn't legal in the first place.

The whole case was a ridiculous hit job.

wumpus
People buy wedding cakes for handfastings, renewals of wedding vows, and many other kinds of ceremonies somewhat like a wedding but not legally a wedding, such as the sham weddings that people hold after being married by a justice of the peace at a courthouse. People even buy them to use on stage during theater productions.

That's not actually at issue in the Colorado case. "I will not sell you that cake because you're gay" is the issue, not the type of cake.

bcrosby95
Supreme court decisions are surprisingly easy to read. You should spend 10-15 minutes informing yourself before spreading ignorance to people that don't know any better.
Bendingo
> "I will not sell you that cake because you're gay" is the issue, not the type of cake.

This contradicts everything I've read about the case. The baker disagreed with gay marriage, not being gay. To suggest that the baker wouldn't have sold any cake to a gay customer is ridiculous.

kelnos
I think these bakers suck, but I do agree with the Supreme Court. A private business should be allowed to refuse to serve customers for whatever reason they choose.

The only exception should be legal monopolies; they shouldn't be allowed to discriminate, because there are otherwise no other options.

radarsat1
> A private business should be allowed to refuse to serve customers for whatever reason they choose.

So you're fine with "whites only" signs in shops?

Hope that doesn't sound too extreme but while I understand where you're coming from I think it's important to acknowledge where the rhetoric about not distinguishing your customers based on their personal characteristics comes from.

kelnos
"Whites only" signs are illegal discrimination. I suppose I should have said, "for whatever legal reason they choose". If the state wishes to make discrimination based on sexual orientation illegal -- which I would support! -- then obviously things go differently.

There are a couple cases being heard by the Supreme Court right now that are attempting to argue that discrimination against gay and transgender people is inherently discrimination based on sex (which is a protected class everywhere in the US); I'm very interested in how that turns out.

mike00632
For the record, almost all of the baking cases in the courts, including the famous Colorado baker case, are in states that have anti-discrimination laws covering LGBT people. These cases are challenging those laws, claiming a religious right to discriminate against LGBT people.
radarsat1
It seems a little circular to say that your argument is based on what is legal, when the discussion is about what should be legal.

Anyway, I think your point is mainly that there is a line between what is required of a business and what constitutes a right to self-determination / self-expression. Where to draw that line is certainly not obvious, and we understand new things about it as society progresses. So in that sense I do understand your point of view, and I happen to agree that businesses should be allowed certain decisions regarding what work they take on, but I hope the point has been made that you have to be careful what you wish for when stating absolutes like "for whatever reason they choose."

dragonwriter
> I think these bakers suck, but I do agree with the Supreme Court. A private business should be allowed to refuse to serve customers for whatever reason they choose.

That's not what the Supreme Court found; it has not invalidated public accommodation anti-discrimination law in general, nor even the specific law the State relied on in the case. It did rule that the specific procedural history of the case indicated that State officials acted with specific targeted religious animus in the case, invalidating the state enforcement action even if the law was Constitutional and the enforcement factually warranted under it otherwise.

dmode
What about refusing to serve black people? This was literally the premise of the civil rights movement.
kelnos
Refusing to serve black people is illegal discrimination. It sucks that sexual orientation isn't a protected class in all states, but the bakers are legally clear here.
inlined
And the “but for” argument should clearly have applied here. They would not have been denied a cake but for the gender (a protected class) of the person they were marrying. This was a clear case of judicial reinterpretation to meet an agenda.
greglindahl
Doesn't Colorado explicitly ban discrimination against gay people?
klyrs
Thirty years ago, if not three, this wouldn't even have been a question. If somebody argues that the refusal is based on religious grounds, it seems like it would be a relevant test case in US politics.
undersuit
OK, but are they a private business? Are they a small reservation only shop, preferably not located on commercially zoned land?
dTal
Hah, ninja'd. We wrote the same comment, except mine's longer.
dlp211
> I think these bakers suck, but I do agree with the Supreme Court. A private business should be allowed to refuse to serve customers for whatever reason they choose.

This was not the conclusion of the SCOTUS case.

It pushed the case back down to the State on procedural considerations.

We also have a bunch of reasons (protected classes) for which a company cannot refuse to do business. Colorado specifically includes sexual orientation in its definition of protected classes.

kelnos
> We also have a bunch of reasons (protected classes) that a company cannot refuse to do business. Colorado specifically includes sexual orientation in their definition of protected classes.

I would absolutely support adding sexual orientation to that list in every state, or at the federal level. But if it's not there in the bakers' state, then, legally, they are (unfortunately) in the clear.

mike00632
Colorado does indeed have anti-discrimination laws to protect gay people which the baker was in clear violation of. The Supreme Court made a narrow ruling on a procedural matter.
squilliam
>We also have a bunch of reasons (protected classes) that a company cannot refuse to do business. Colorado specifically includes sexual orientation in their definition of protected classes.

I don't think this is relevant to this specific case. The baker didn't refuse to do business with the gay couple. He was happy to sell them a generic off-the-shelf wedding cake.

He refused to sell them a personalized cake, which is considered a form of expression. The government cannot compel you to express yourself a certain way if it goes against your religious beliefs.

pgcj_poster
> He was happy to sell them a generic off-the-shelf wedding cake.

If you're talking about the Masterpiece Cakeshop baker, he didn't say that until the lawsuit was in-progress. At the time that the couple went in, he refused to serve them without discussing what they wanted.

dlp211
But that was not determined by this SCOTUS case and remains an open question to the best of my knowledge.
RandomTisk
That's not what happened, the baker offered to sell them any cake he had. The couple also wanted him to write a message on the cake that was against his religious beliefs.
megous
Ever read the ToS on free services? They have the right to terminate the service at any time, for any reason. It doesn't matter if you've connected hundreds of services to your free Google login, or keep thousands of mails and contacts in your Gmail, or made a great business grabbing eyes on YouTube so that Google can sell ads on your videos and share a bit of the profits with you.

For example, today we've seen Google terminate distribution of an app on its app store because it disagreed with the developer having a donation link, despite that not being excluded in the ToS (or so I heard). And it doesn't really matter how your company refuses a service. The ToS just codifies the reasons for refusal, and the ToS can change at any time.

This is already a world we live in, SV companies practice this daily, and I doubt tech companies want this to change.

Society is the moral arbiter at any rate. Tech workers are part of society, and their companies can't really make rules like you suggested (no service for gays) at scale without getting a major backlash from the public, or internally. Or if they could, it would indicate a much wider societal acceptance of such rules.

anarchodev
I'm not so worried about the supreme court ruling -- businesses don't usually succeed or fail based on the justices ruling that their business model isn't technically illegal. I want to live in a world where the community embraces good things, and actively rejects fascists and bigots whether or not it's legal to do so.
golergka
I'm completely on the Supreme Court's side in the case of the baker (who is an asshole but should have the legal right to be one). However, this analogy doesn't work with Facebook or Twitter, as they have oligopoly status in terms of access to a mass audience: there are hundreds of different bakeries that you can choose from, but just a couple of SV companies can very effectively silence you.

I still don't know what would be an effective solution to this problem, but giving these giants the same freedom to do whatever they want that a small business enjoys just feels wrong.

sunderw
Nobody here noticed the difference between "refusing to serve a customer" and "not wanting to do business with other, unethical businesses". GitLab could definitely choose which other businesses they associate with, and that would not mean allowing minorities to be targeted. But no customer-facing business should be allowed to refuse its service to just anyone, because you're right: it would very easily lead to cases like the one you're describing.
cameronbrown
We also agreed not to discriminate in service based on race. It's a similar matter, but nobody thinks that [discriminating] was a good idea. Freedom of association has limits.
tacocataco
I never understood why a gay couple wouldn't want to support a gay wedding cake baker.
dTal
Is this a rhetorical question? Yes, it is the world I want to live in. If the business operates within the context of healthy competition, then while they are free to refuse service, others are equally free to boycott it. This democratizes social norms. I don't want to force gun shops to sell to people they have a bad feeling about (say, because they have a racist tattoo), and I don't want a law that attempts to distinguish between "good" iffy feelings and "bad" ones.

If on the other hand the company is a monopoly and no reasonable alternatives are available, then it is de-facto infrastructure and should be more stringently regulated for impartiality, as is the government (and frankly such companies are a problem anyway, and should possibly be adopted by the government when they reach that size). The DMV isn't allowed to turn you away for a swastika tattoo, and neither should the electric company. But a baker? Absolutely.

growse
The problem with this seemingly simple philosophy is that it results in minorities being ostracised. "Democratising social norms" turns into "Outlawing anything the majority won't tolerate". Suddenly you have towns where black people can't live, because no-one will do business with them. Sure, "healthy competition" should solve this problem - there's money to be made! But people are not perfectly economically rational. We are tribal creatures of prejudice, predisposed to subconsciously reject and suspect anything or anyone as "different".

Laws are often made to protect minorities, because history shows us time and time again that an unchecked majority of people can be real dicks.

wolco
How far do you want to go? Should a company be forced to offer products to minorities? Say a comb maker creates combs for the straight-hair market; should they be forced to offer hair picks? Not doing so leaves that group ostracised.
asjw
As far as possible.

Of course companies have to offer products to minorities; have you ever seen the parking spots for handicapped people at the mall?

Of course a restaurant has to ask you if you have some intolerance before serving you food that could contain allergens.

The USA had better start to learn that 300 million people don't make up the entire world.

mijamo
In France it is straight-up illegal to refuse to sell to a customer without a legitimate reason. Whether a reason is legitimate is up to court interpretation, but not liking someone's skin color or sexual orientation would not qualify. A racist tattoo at a shooting range probably would.

And it doesn't cause any trouble. B2B is obviously not covered.

waterhouse
> Laws are often made to protect minorities, because history shows us time and time again that an unchecked majority of people can be real dicks.

Under democracy, a majority of people can decide what laws there are, and inflict their dickishness as they wish. Laws are often made to oppress minorities. For example: https://en.wikipedia.org/wiki/Jim_Crow_laws

See also part I of this essay: https://slatestarcodex.com/2018/02/21/current-affairs-some-p...

growse
While there are obvious exceptions, much of the time the majority has some empathy with traditionally oppressed minorities and votes for politicians who make laws to protect them.

> Under democracy, a majority of people can decide what laws there are

Also worth pointing out that under most democracies, this isn't true. The people decide which people should decide what laws there should be. It's an important difference.

tomnipotent
> Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society."

The civil rights movement and suffrage prove this is patently untrue. The biggest difference is that gender, religion, and ethnicity are easy things to point out, while ideological belief systems are tricky.

Where do you draw the line on what grounds a business can refuse service? Should I be able to refuse service to Republicans? How about amputees? I hate the color yellow, so I'm not going to sell to anyone wearing an outfit that has that color. I live in Los Angeles, where it's still plenty common to see signs saying "No Shoes, No Shirt, No Service" - which, btw, is not aimed at scantily clad beachgoers but targets the poor and homeless population. Same with silly dress codes around baggy pants and hats - all clear examples of bullshit rules intended to single out an "undesirable" customer. Except it's not race/religion/gender, so it's cool, right?

I'm in the camp that the reasons a business can refuse service should be a whitelist, not a blacklist.

mytailorisrich
In some countries (e.g. France) it is illegal to refuse to sell to an individual bona fide customer. That's in addition to the anti-discrimination (race, gender, etc) laws we have in Europe.

So if you sell cakes for €10 and I show up and hand you €10 you are not legally allowed to refuse to sell me that cake. (heads have rolled for less...)

This is pretty effective at preventing discrimination that may otherwise be difficult to prove.

TeMPOraL
> It seems to me if I had power I wouldn't bother trying to persuade my boss not to do business with certain agencies, I'd just make it illegal and force them to change. Does that seem like a course of action available to gitlab employees?

You're thinking in terms of fighting the last war. Companies of today know how to work around things that are at risk of becoming illegal. Today, a more effective strategy to persuade your boss about an issue is to spin it as something outrageous on social media, in the hope that news portals will pick it up. This power is very much available to a GitLab employee.

LeftHandPath
> Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society." Keep in mind that anyone is free to use other services (or roll their own) and gitlab can't do anything about that. Neither would it be overreaching for the workers building that product to request a say in how it's used.

I think we're getting to the core of it here. The law's opinion isn't clear yet, but it appears to lean in this direction: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora... GitLab's decision may not be adequate - but it could help dodge a bullet or two while the law sorts itself out.

> "The problem with the whole 'activism' mindset is it doesn't actually target the people who created the problem, it just creates lots of noise – and the problem with noise is facts get lost," Fellows said.

The responsible thing to do is to refuse to play ball. If they claimed to be actively vetting content and something slipped through the cracks, it could be more legitimately portrayed as an endorsement than something that occurred with a laissez-faire policy in place.

anm89
>Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society."

This is similar rhetoric to what people would have used to justify segregated restaurants and schools. I'm not saying you sympathize with this but there is a reason why we don't want public businesses turning people away for political speech or other categories. It's not a nice place to end up as a society.

Gunax
> Refusing to serve a customer when you disagree with that customer's goal is pretty far from being a "moral arbiter for society." Keep in mind that anyone is free to use other services (or roll their own) and gitlab can't do anything about that. Neither would it be overreaching for the workers building that product to request a say in how it's used.

I think this is the crux of the issue. Anyone is of course free to roll their own GitLab (or facebook, or news channel). But this ignores reality.

Facebook controls ~90% of social media. There's a very high chance that Facebook could sway every election in America (and in lots of other countries as well) if they truly wanted to.

I think we are entering a new era. Just as it required a paradigm shift to outlaw anti-competitive practices, I think we need to re-consider what rights these platforms have around speech.

yxhuvud
> Facebook control ~90% of social media.

Source? That doesn't really match the numbers I've seen. Especially not for younger people.

Gunax
Eh, I guess this is not true. To me, 'social media' always meant websites like MySpace and Orkut where friends connect, and not just 'website where users can communicate in some way' a la Pinterest/Reddit/YouTube.

But I realise now that isn't the right definition... still not sure what the word is for the Friendster/MySpace clones--but among those I am pretty sure Facebook is more than 90% of the market (outside of China), possibly more than 99% with Google+ closing.

Majestic121
Remember that Facebook owns Instagram and Whatsapp.

They might not be at 90%+ everywhere, but they do own a significant share.

UserIsUnused
Instagram and WhatsApp are Facebook. Of course, even then 90% might be too much.
dredmorbius
By MAU, Facebook has 2.3 billion vs. 1.9 billion for YouTube (arguably not a messaging-based SM site) and 1.0 billion for Instagram (owned by FB). Qzone is 4th at 563 million, 17% of FB+Insta.

Not quite 90% by that measure, but close.

https://www.dreamgrow.com/top-15-most-popular-social-network...

Mindshare, reach, time-on-site, and media references are alternate measures. Definitions matter.
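
For reference, a quick reconstruction of the 17% figure from the MAU numbers quoted above (an editorial check, not part of the original comment):

\[
\frac{563\ \text{million (Qzone)}}{2{,}300\ \text{million (FB)} + 1{,}000\ \text{million (Instagram)}} = \frac{563}{3{,}300} \approx 0.17
\]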

wolco
I didn't realize Facebook owned 90%. If you exclude Twitter/Reddit, what's left, Snapchat?
hokumguru
Facebook owns Instagram - the 2nd largest social network - as well as WhatsApp, which is ginormous in its own right.
keerthiko
I'd guess mostly WeChat/Alibaba/TikTok, and a few tiny players that ebb and flow in impact.

XorNot
Ah the old "too big to fail" argument, alive and well.
akersten
Or, just break up the alleged monopoly under existing anti-trust regulation, instead of welcoming a terrifying new power of the government mandating that private business serve customers they disagree with?
asjw
The world has changed; the USA is no longer the center of it. FB is not a giant if you compare it to WeChat.

Now try to apply US antitrust laws to that.

skewart
How would you break up a giant social network? What lines would you cut it along?
akersten
I mean, I personally wouldn't. I don't think this is an actual problem, but in the face of a suggestion that "tech platforms are too big, ergo they must not be allowed a choice in who they do business with", I'll pick the "use anti-trust framework to make them not so big" option over "force a business transaction" any day.

Of course, those who make the "publisher or platform, pick one!" false dichotomy aren't really genuinely concerned about the size of the company. They just want to force someone to host their content, which is why they jump to "free speech means more nowadays than what it says in the constitution [so platforms must carry my speech]."

Sorry for the tangent, just want to make sure my position is clear.

skinkestek
> Of course, those who make the "publisher or platform, pick one!" false dichotomy aren't really genuinely concerned about the size of the company. They just want to force someone to host their content,

It is not nice to lump us all together. Many of us here both

- despise certain content

- and still find it totally unacceptable that tech giants are allowed to do whatever they want with their power "because hate speech"

This is just a variant of introducing bad laws "because of terrorism":

The laws are bad not because anyone wants terrorism but because we don't want anybody to be punished without a good reason.

And today, as tech giants wield more power than many courts or - in many but obviously not all ways - even small countries, it might be time to make sure they have to be careful with that power.

this_was_posted
Force them to provide an openly accessible API to allow people to receive messages/event invites and send messages to the users of that platform. Then they can choose to use a different social network without having to give up their connections with people who haven't jumped ship. This way the advantage of the network effect of popular social networks will disappear.
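
To make that proposal concrete, here is a minimal TypeScript sketch of what the client side of such a mandated federation API might look like. Everything in it (endpoint paths, field names, bearer-token auth) is hypothetical and only illustrates the send/receive surface described above; no existing platform exposes this interface.

```typescript
// Hypothetical federation API: lets a user on one network exchange
// messages and event invites with users of another platform.
// All names and endpoints below are illustrative assumptions.

interface FederatedMessage {
  from: string;                      // e.g. "alice@bignetwork.example"
  to: string;                        // e.g. "bob@othernetwork.example"
  kind: "message" | "event_invite";
  body: string;
  sentAt: string;                    // ISO 8601 timestamp
}

interface FederationClient {
  send(msg: FederatedMessage): Promise<void>;          // push to the host platform
  inbox(since?: string): Promise<FederatedMessage[]>;  // pull messages for our user
}

// Minimal HTTP-backed client against an assumed REST endpoint.
class HttpFederationClient implements FederationClient {
  constructor(private baseUrl: string, private token: string) {}

  async send(msg: FederatedMessage): Promise<void> {
    const res = await fetch(`${this.baseUrl}/federation/messages`, {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(msg),
    });
    if (!res.ok) throw new Error(`send failed: ${res.status}`);
  }

  async inbox(since?: string): Promise<FederatedMessage[]> {
    const url = new URL(`${this.baseUrl}/federation/inbox`);
    if (since) url.searchParams.set("since", since);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${this.token}` },
    });
    if (!res.ok) throw new Error(`inbox failed: ${res.status}`);
    return res.json();
  }
}
```

With an interface like this mandated, a competing network could let its users keep messaging contacts who stayed behind, which is the point the comment makes about dissolving the network effect.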
majani
Telcos have shown that there are ways to maintain network effects in the face of federation: ridiculous connection fees for users outside of your network.
Aeolun
On the list of ‘terrifying new powers’ this one is waaay down. If the customer is not doing anything illegal you have no reason not to serve them.
akersten
Other than that pesky nebulous western value of "Freedom", sure, I guess you have no reason to be able to make that kind of business decision. Compelled speech or compelled production of value come in many disguises, and are hallmarks of oppressive and dangerous regimes. It's disingenuous to suggest that starting down that path would be innocent.
eanzenberg
But these things are already happening, such as forcing religious bakers to bake cakes for couples they don't agree with.
camel_Snake
Of course, because the reasoning wasn't 'you are a jerk' or 'we don't want to make that type of cake' or 'we're too busy right now' but rather 'you are gay'. Sexual orientation is a protected class and acting like a jerk on the internet isn't.

No one is forcing all businesses to serve all customers everywhere, always. We are saying you can't discriminate on the basis of membership in marginalized groups.

Gunax
I lean libertarian, but sometimes reality breaks that.

True libertarianism dictates that I can ban whomever I want from my shop. But in reality most of the bans were of the 'No Negroes' variety. I would love to think that the free market would take care of discriminatory businesses, but history shows it will not.

I agree that the true issue is ultimately monopoly: if a town has 10 newspapers and one goes democrat-leaning, no one would really care. But it's different if there is only 1 newspaper.

I worry that there can only ever be a single Friendster/Google+/Myspace around, because people will always gravitate towards the most popular one.

agensaequivocum
Or the free market was hindered by the government's regulatory Jim Crow laws.
XorNot
If you see platforms being moderated, it's because the platforms want to survive. Because there usually is only one around, and once it becomes toxic the regular people flee and the toxic elements follow them, since having a platform is not the point for them.
graeme
You'd be surprised at what is "not illegal". Every community moderates.

You can make an argument that some sites have moderated too much, or been too overtly political.

But every community moderates, and has to. This can be demonstrated with a simple example. Guess what isn't illegal? Spam!

It should be obvious that FB and Twitter etc have to take spam down though.

Then there's abusive behaviour. Stalking, harassment, etc. Often not illegal, but hurts a platform. Better take that down.

What about propaganda? They should let it all go? That sure sounds like a....phone network. Actually wait, it sounds more like the news media, which has been regulated for decades. A zero moderation policy would likely have led to demands for regulation, too.

And then there's the massive category of topics which are not illegal, but horrifying. Have zero moderation, and you end up as 4chan or worse.

--------

I'm generally in favour of keeping politics out of things, but your "we take stuff down only for a court order" is a naive view. Literally every forum moderates; they have no choice.

quotemstr
> Spam

Spam, pornography, and profanity are not viewpoints. The problem with social media is not censorship of unwanted content that an ordinary person can identify without reference to viewpoint. What bothers civil libertarians, myself included, is censorship of specific points of view, even when these points of view are expressed in a calm and civil manner.

> Horrifying

Who gets to decide what's horrifying? You? Why?

XorNot
Ah yes, the civil point of view that the state should systemically exterminate entire groups of people. That someone should do something about those people. That critic.

But you know, totally fine if they're polite about it...

First, I must confess that over the past few years I have been gravely disappointed with the white moderate. I have almost reached the regrettable conclusion that the Negro’s great stumbling block in his stride toward freedom is not the White Citizen’s Council-er or the Ku Klux Klanner, but the white moderate, who is more devoted to “order” than to justice; who prefers a negative peace which is the absence of tension to a positive peace which is the presence of justice; who constantly says: “I agree with you in the goal you seek, but I cannot agree with your methods of direct action”; who paternalistically believes he can set the timetable for another man’s freedom; who lives by a mythical concept of time and who constantly advises the Negro to wait for a “more convenient season.” Shallow understanding from people of good will is more frustrating than absolute misunderstanding from people of ill will. Lukewarm acceptance is much more bewildering than outright rejection.

camel_Snake
> What bothers civil libertarians, myself included, is censorship of specific points of view, even when these points of view are expressed in a calm and civil manner.

Is this actually occurring? I haven't seen anyone get deplatformed because the company disagrees with the message, but rather because they violated ToS with regard to how they were communicating their message.

clairity
good point, but from a tactical perspective, not moderating might have made sense, to force lawmakers' hands in passing stronger laws about spam, harassment, propaganda, etc.

they'd have a regulatory shield and probably retain carrier status, rather than sticking their necks out by selectively moderating content.

a moderate amount of moderation is my preference, but i see the value in freer speech zones on the internet (that i can mostly ignore).

AgentME
>good point, but from a tactical perspective, not moderating might have made sense, to force lawmakers' hands in passing stronger laws about spam, harrassment, propaganda, etc.

Why would companies find that preferable to moderating themselves? If the government makes laws prohibiting that stuff, then that means the companies could get in trouble if they let any of that through. If companies self-moderate, then there will be less pressure for laws like that to exist, and the companies won't get hit with penalties for missing a few things here and there.

IAmEveryone
The US government is practically incapable of taking on that role because that actually would make it a First Amendment issue.

But, as graeme has pointed out, this idea of any moderation increasing a platform's liability is as wrong as it is widespread.

graeme
They do have a regulatory shield though. Section 230 allows platforms to moderate their sites, while leaving them not liable for any content they do leave up. It's considered speech of the user.

This has come to a head as platforms have grown larger. Note that even if some platforms were strictly neutral, they'd still be affected by the changes the government is proposing. It would have been very unlikely that none of the major platforms would have moderated beyond legal minimums.

https://en.m.wikipedia.org/wiki/Section_230_of_the_Communica...

clairity
i realize it's hard to draw a bright line on acceptable moderation, but then, is it fruitless to strengthen such laws?

seems like a lot of lawsuits and not a lot of legislating, from the wiki.

wpietri
"Don't discuss politics at work," is a political statement. It rules particular views in bounds and rules others out.

Similarly any business ends up making moral choices. It's unavoidable. To act in the world is to have impact. To consciously act is to intend particular impact. Commerce is inherently social; you choose to serve certain people in certain ways, with particular outcomes intended. These choices are inherently moral.

The notion that we should ignore that sometimes is of course a strongly political view. And when it pretends to be non-political, I think it's also incoherent.

Lutger
Good to hear a sane voice here. It's weird how people are able to convince themselves and others that they can just not take a position. As if that is not a position in itself, usually supporting the status quo and the conventional morality of the time.

I think people conflate 'I don't want to think about this' with 'I am neutral and not a party in this issue'.

eeZah7Ux
Spot on. This whole thread on HN reeks of "I was just following orders".
qmmmur
It is also incredibly luxurious to not engage with politics and the ramifications of our actions on both micro and macro- levels.
hellllllllooo
Thank you for writing what I was thinking so eloquently.

Everyone's life is affected by politics, and GitLab is making a political decision as to which politics are acceptable to discuss at work. Politics will still be discussed, but only the politics that those deciding deem acceptable, and this is inherently political.

dcolkitt
You're stretching the definition of "political" to such a degree that it renders the word meaningless. You're basically claiming that any act that has any impact on anyone is political. That it's basically impossible for an organization to be non-political or even less political than any other organization.

And that simply flies in the face of common sense. Only the most hardcore post-modernist would deny that the NRA is a more political organization than the IETF. So obviously some dimension of "politicization" exists beyond just "acting in the world".

And if an organization wants to be less political, one way to do so is by relying less on political considerations when making decisions. With "political" simply meaning topics that are widely and commonly considered to be political. In the same way that the NRA is widely and commonly considered to be political in a way that the IETF is not.

You can debate about whether it actually is a lofty goal for an organization to strive to be non-political. Certainly there are many counter-arguments for why organizations should be more political. But I don't think you can deny that the dimension exists in some meaningful and actionable way unless you completely throw common sense out the window.

hellllllllooo
> it's basically impossible for an organization to be non-political

This is exactly the point. It's impossible for an org to be non-political. Any choice made has wider societal context and effects for the workers at that company. Politics is basically the effect of decisions on people. Not all political decisions are as extreme and sensational as the NRA's, etc.; most are mundane but still important.

Your definition of this political spectrum seems to be focused on politics that generate controversy rather than on their importance or the number of people they affect. Mundane politics are also politics; you just don't necessarily notice them.

GitLab has to define what are acceptable politics to discuss and draw that line. This is a decision that has to be made and will be based on their political views and what they think is normal or extreme which won't be the same for everyone who works there. The whole thing just seems very naive.

wpietri
I agree "politicalness" is a spectrum. I just disagree that it's possible to be anywhere close to zero if you are running a company.

The current American notion of "nonpolitical" is based on a post-WWII period where there was a broad consensus on a bunch of issues, and I think that consensus was in large part driven by the moral clarity provided by the war. But that started to decline circa 1980: https://xkcd.com/1127/large/

It was still propped up for a while by common interests between southern Democrats and non-northeastern Republicans. A big common interest being an opposition to civil rights. But most of the blue dog Democrats are now Republicans and liberal Republicans are basically extinct, so that middle ground has basically vanished.

For you "common sense" is doing a ton of work here, as is "widely considered". But as far as I can tell, it's tautological. You want your priors (or those of dominant groups) to be considered nonpolitical, while those of others to be political.

> claiming that any act that has any impact on anyone is political

What is politics if not people regulating the impacts we have on one another? Of societies deciding whose pain and whose gain is most important?

fake-name
> to strive to be non-political.

To strive to be "non-political" is an intensely political position. It's basically a complete support of the status quo.

To not take a stand on a position is as much of a position as taking a stand. We live in this world. It's literally impossible to disengage.

Not saying anything, or not disagreeing is as much a statement as saying something, or disagreeing. Neutrality is a political position. Deal with it.

----

The IETF is absolutely a political organization, but its focus is in areas where there is much less discord and a general interest in actual practical problem solving, with hard, measurable outcomes, so there's much less room for gaming of facts. Go look at the discussion of any contentious RFC if you don't agree. Just because it's not advocating an extreme viewpoint that's completely divorced from reality (as the NRA is) doesn't make it "less political", it just makes it less contentious.

ajscanlan
> It's basically a complete support of the status quo.

I've seen many people spout this off as if it's self-evident, and I can't for the life of me understand why.

The status quo isn't some neutral, natural resting state that will continue on until otherwise affected, it needs to be _constantly_ maintained.

Being non-political doesn't maintain the status quo, because maintaining the status quo is an _active_ endeavour, in the same way a plane doesn't keep flying forever once its engines turn off.

wpietri
This is true but misleading. The status quo is much less of an active endeavor than changing the status quo.

As an obvious example, the American Revolution was clearly more active. And surely the revolutionaries were seen as more "political" than people who just wanted to go about their lives. Said lives of course included a variety of actions that directly or indirectly supported British rule.

Nasrudith
It fits a "total war" mentality, essentially. Delivering food to a city is supporting a siege. So is paying protection money supporting an armed group of criminals. Technically right, but a matter of nuance. It may be effective in some cases, but, like nearly anything, assuming automatic morality is a way to leap off the slippery slope.

Even the advocates don't usually take it literally, because of how barking mad that would be, in a "strangle infants for not supporting the cause" way. But that isn't special, because any philosophy can be twisted into something horrific.

I suspect its popularity is more about memetic effectiveness than deep philosophy - regardless of whether it is right.

Lutger
You are mostly right, except that the actions of gitlab _are_ an active endeavour in various respects:

- putting a stop on political discussion at the workplace

- making it clear in the handbook as a matter of policy that gitlab will sell to whomever they legally can

- sending a clear signal that any time spent considering not doing so, or expressing an alternative opinion, is seen as a violation of GitLab's values and a waste of time

- engaging in commercial activity with anybody, assuming you don't rip them off and they actually benefit from it, can also be seen as support in a very weak sense

However, 'support' also has the meaning of condoning. It's like a bystander not taking action when somebody is sexually harassed. You don't say he actively caused the harassment, but his lack of action betrays implicit support.

theon144
>I don't wan't some git hosting startup to be the arbiter of morality for society.

Me neither! The point is that everyone should be an arbiter of morality for society. Abstaining from moral judgements because it's "inefficient" is just an excuse, and it leaves society worse off as a whole. Tech companies are participating in society just like any other company; there's no reason they should be exempt from ethical considerations.

Technology is not neutral, never was, and pretending it is isn't going to change it - it's just going to pave the way for "inevitable" unethical programs and moral lapses (see: mass surveillance, Amazon->Ring cooperation, ICE cooperation...)

kortilla
Is it ethical to cut these same people off from food, etc as well when they haven’t violated the laws our society has decided should be followed?

Companies and people "being moral arbiters" of everyone they interact with is just idiots making knee-jerk reactions based on other people's perceptions and opinions.

An employee at a software company knows far too little to meaningfully judge whether ICE should be helped. Are you even aware of the duties of ICE?

gbanfalvi
> The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product.

They built these services. They are, at least in part, responsible for how they work and for the consequences of what they built. The content and data they're serving have meaning to us.

You can’t pretend that a social network doesn’t have a role in bringing anti-vaxxers together and enabling them to organize.

You can’t pretend that an advertising platform doesn’t have a role in distributing misleading information by business or other interests.

> If these users are breaking laws...

Laws aren’t the ultimate arbiter of everything. There’s a reason they change and evolve over time. Personal responsibility is not a law, yet we expect everyone to have some.

wolco
Social networks bring people together.

Advertising sends a message to groups of people.

Roads allow people to travel by car.

Roads are being used to bring people to places where murders happen. You can't pretend that roads have no role.

So many things had a role: power, the internet, the company that made the computer they used, teachers, friends (or the lack of them), parents. Why decide that social media or ads are the problem?

Social media has reduced the number of cults.

manicdee
And yet we have regulations about what you are allowed to do on public roads, such as which direction you travel, how fast you go, restrictions on open wheels versus having covers, sound pressure levels at various distances, visibility at night, appropriate indicators to allow other road users to be advised what you are planning to do, controls on emissions from vehicles, regulations about how close you can get to the car in front while moving, licenses allowing for drivers to be disqualified from driving, etc.

What similar controls do we have for social media? An institutionalised system for silencing people, euphemistically called "suspending users reported for harassment", which is just a formalised lynch mob.

cloverich
A better analogy would be if you could pay the construction crews to route all right-leaning but non-Trump-supporting drivers down new, secret roads that only they see, and then let you line those roads with propaganda that, again, only they see. (Replace right-leaning and Trump with whatever.)
gbanfalvi
Come on, there’s a difference between building a service or a tool that ends up getting used for problematic purposes (ISIS used lots of Toyotas; is Toyota evil?) and knowingly allowing and even enabling problematic purposes (imagine if Toyota recognized ISIS as a rising power in the Middle East and started selling them trucks).

We can always replace words in other peoples’ statements to make them sound silly, but I’ll roll with it.

What if these roads provided, as part of the government's offering, rest stops and gathering points exclusively for murderers? Just like how a social network allows problematic groups of people to gather and enables them.

What if certain schools accepted money from anyone and in exchange taught kids any curriculum, including murder? Just like how some services bombard people with outright lies, because special interests pay them to.

If your service has a massive reach and influence you can’t just say “nope, we don’t do politics”. It’s simply not honest.

kortilla
Toyota does know their trucks are being sold to ISIS. Yet they don't do something drastic like banning all truck sales to the Middle East.
Nasrudith
So does that mean libraries should be shut down yesterday? There are chemistry books and outright army field manuals to do essentially exactly that - the status of war is the only thing which makes it not-murder.
jtms
I would advocate for Joe Rogan; he is a voice of openness and willingness to talk about things rather than just shutting down conversations because you might not agree. He is also a great listener. The world needs more people like Rogan.
anm89
Actually I generally agree with this. I think he has some distinct flaws too. I just didn't want this to become a referendum on Joe Rogan.
midnighttoker
You should shut down conversations with nazis and the far right figures he regularly has as guests. Joe Rogan platforms these people for profit. Joe Rogan is not a good person.
hayd
What "Nazis" and "far right" figures are you talking about? Can you give examples of both? Thanks.
Brakenshire
What he does is allow someone to explain their beliefs, but he makes no real effort to establish evidence to support or oppose that.

It’s actually perfect content for the internet, especially for the really out-there interviewees, because people inside the relevant filter bubble can watch and feel their beliefs are being validated, and someone outside can watch and be entertained by a freak show. Each person can watch the same content and be entertained for their own purposes. I’m not sure how healthy it is for society though.

pnw_hazor
Yes. I agree that companies shouldn't become ad hoc moral arbiters beholden to social media campaigns, or the like.

However, the bigger problem is that companies with international presence are going to have an increasingly difficult time navigating legal sanctions or directives from the various countries they may sell to or operate in.

The challenge comes when a company is faced with being closed out of a large market if they don't comply with another country's legal directives. While the best/right answer might be to ignore those countries, the reality of having to withdraw from valuable markets would be challenging.

Though, while it may seem like a new problem, non-digital industries have been dealing with this kind of thing for a long time. Maybe the WTO, or the like, will develop a fair trade/IP regime to guard against enabling a country to block or sanction international digital/media companies for violating local laws. But I wouldn't expect anything like that to happen soon, because countries are not too keen on giving up sovereignty.

Maybe spontaneous digital trade wars will become a thing.

edit: typo as usual

BlueTemplar
That's what Google did with China.
xenocyon
Counterpoint: many tech workers want some agency over what they are building and don't want to be forced to work on nefarious applications. I don't seek an outsized claim on society but I do reserve the right to decide what my hands will - or won't - build.
LegitShady
nobody is forcing you to do anything - you're free to find a new job at any time, unless you're indentured and you didn't tell anyone.

If you want to decide what to build go into business for yourself. If you're working for someone else you have the choice to either work for them and do what they want or quit.

Dylan16807
A job is negotiated. You don't just accept or quit, you have a thousand different ways to come to a compromise with your employer about all sorts of things. It's baffling that so many people insist that the only feedback you can give your employer is quitting on the spot.
LegitShady
If you have an ethical or moral issue with your work, chances are you're doing the wrong work. You can of course try other avenues, but why do you think the business should compromise with a replaceable employee about his personal ethics?

That's not the way this works. He's free to find work that agrees with him or make his own work. But this "I want to work for you and decide what I work on" is a non-starter for most people.

Dylan16807
Do you say the same things to a union when they try to address things? "No, you shouldn't have any say over the wages, or the firing policy, or whether we supply dictators"? Because unions negotiate on major issues with companies all the time.

If you think unions shouldn't negotiate, you're awful.

If you're okay with unions doing negotiation, why shouldn't a lone employee be able to negotiate a little bit? Why is a group of employees protesting not treated like an impromptu union?

"You're small compared to the company, so you should have exactly zero influence, instead of influence proportional to your contributions" is something a petty tyrant says.

LegitShady
unions are about working conditions, not deciding what you get to work on. You're under the direction of your employer and their organizational structure.

Unions have exactly zero to do with choosing to do the work you want to do while getting paid by someone else.

Dylan16807
What jobs a company takes are part of working conditions. It's a valid thing for unions, or others, to negotiate over.

It's also pretty silly to say "Well if companies A, B, and C are hiring individually, the workers can all choose to work for A and B but not C. But if they all contract out to company X, then company X workers need to shut up and either work for all three or none. They can't even ask for things to change, they can only quit."

LegitShady
>What jobs a company takes are part of working conditions.

Again, if you want to pick your own job, work for yourself or find the right employer.

Picking what you work on is not a reasonable expectation of any employee.

If nobody is doing C, start your own firm and get rich doing something no one is doing. Or perhaps you'll find out why no one is doing C. But forcing your employer to make bad business decisions under the guise of "working conditions" is a bridge too far.

Dylan16807
The employer is never forced to make bad business decisions. They get to decide what is better for the company: taking that contract, or having all the protesting employees continue to work for them.

Doing it your way can actually cause much worse business decisions. If employees don't give any verbal feedback, only quitting, then the employer might take a hot-button contract and suddenly have multiple important workers quit all at once, losing far more money to replace them than the contract was worth.

icelancer
Freedom of labor movement has never been higher. You have massive agency.
prepend
That’s why I like posts like this from GitLab. They are clear about what their priorities are, so you can choose what to build with your hands.

Lots of people don’t care or don’t have time to evaluate the thousands or millions of customers using the software. The evaluation never ends, so even if you shut off the really bad ones, there will be another.

Picking a legal limit lets me focus on other things.

RavingGoat
Gitlab is a common whore. They'll do whatever you want as long as you are paying.
prepend
Isn’t that why whores are awesome? Who is going to frequent whores who won’t do stuff?

Also, what’s so bad about common whores? Whores are people too, I try not to judge people based on their chosen profession.

brianpgordon
Companies do not magically become exempt from needing to behave ethically just because they're participants in the economy. If, for example, some government is committing human rights violations, companies have an ethical obligation not to enable it, even if the services they offer aren't under specific legal sanctions. This idea of a company just doing business with no social or ethical responsibility to anyone other than stockholders is an ancap fantasy.

And Gitlab isn't some unconcerned third party passing judgment on morality for the rest of us. We're not talking about your local Dairy Queen franchise gratuitously taking a position on Brexit for no reason. The only power Gitlab has is to withhold their own service. If you're going to argue that their service is so important for society to function that withholding that service constitutes being an "arbiter of morality for society," to the extent that's even true it only increases their responsibility to make sure its products aren't used for evil. A Dairy Queen franchisee doesn't have much power so they don't have much responsibility, but a Google or an IBM has enormous power to make a difference and therefore they have enormous responsibility.

I fundamentally don't agree that corporations should be given carte blanche to use any legal options to maximize profit with no regard to the consequences. The political process and legislation/regulation is probably a better way to address companies behaving unethically, but given that in many cases our political process has failed to put those laws in place we need to hold corporations accountable ourselves.

KirinDave
But don't you think individual employees have a responsibility to recognize when they're contributing to causes they don't like, and possibly resign?

Doesn't this rule therefore forbid employees from discussing and acting collectively over this, which in the US is a federally guaranteed right?

burtonator
I wish it was that simple honestly.

If Google is doing business with nazis then as a consumer I have absolutely every right to boycott Google.

gknapp
In this case, it's likely that Nazis would be labeled a terrorist organization, or a wartime enemy, and would be subject to federally-enforced economic sanctions. It actually would be illegal for Google to do business with Nazis.
kevinmchugh
We're talking about Illinois Nazis, not 1945 Nazis. The bums who won their court case - it doesn't seem like the law is stopping them.
klyrs
I'm not so sure. In the years before the US entered WW2, Nazis enjoyed strong political support from the American Right; the discussion is known as The Great Debate. It was the attack on Pearl Harbor, and not a clear moral consensus, which closed the debate.
war1025
Consumers have every right not to do business with someone for any reason they please.

Generally where things go into lawsuit territory is when a business refuses to provide service to someone based on something that has protected status.

dwild
> If these users are breaking laws, then put them out of business via the courts and sieze the assets (the repos in this case) via legal means.

What if it's legal where it's happening? Let's say that China uses Gitlab to host the repo for their software that tracks US citizens. Would you be happy with that?

> The tech unicorns screwed themselves over BIG TIME, the second they stopped claiming they were just infrastructure and platforms and got into content moderation.

That's absurd, it was always true for ANY industry. You can decide not to support a company by not buying from them because you don't support their moral standards. That's perfectly fine. Would you still be with an ISP that is against net neutrality while there is another one that is for net neutrality? Now replace "net neutrality" with any other moral decision and it's still valid.

That's what makes companies decide to take moral stands. It's purely economics.

> They will now forever be a pawn of whoever has some power and has some agenda.

That was always true and still is, even with Gitlab's decision. Even in this case, Gitlab's decision makes them a pawn to whoever has the means to do business with them. They want to still be able to get the business of ICE, China, etc... and you.

philwelch
Assuming Gitlab is based in the United States, I wouldn’t expect them to unilaterally deny service to the Chinese communist regime. That’s a situation where I would advocate for Congress to impose economic sanctions.
gameswithgo
>The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product

So do you also feel that billionaires shouldn't have an outsized voice in society?

LegitShady
I had no idea who Naval Ravikant was until I randomly read one of his tweets, thought "this dude has good thinking", and went to look him up. Interesting character.
p4bl0
That's one part of it. The part where employees cannot discuss politics at work is very problematic, however. What about unionizing?
journalctl
It is ostensibly illegal to prevent employees from unionizing in the United States.
p4bl0
Then the requirement to not discuss politics at work must be illegal too.
journalctl
No? You can’t forbid people from forming a union, but you can forbid things like arguing about the president.
p4bl0
The President, the Congress, the Senate: every political institution has a direct link to the labor code, for example. They can make or remove laws that have a direct impact on working conditions. Unions must be able to talk about all this.
drewbug01
Discussing things you may want to unionize over is often connected to the larger political realm, and cannot be severed from it.

Here’s an example: should a group of unionizing employees be able to say “vote for candidate foo, because they support legislation that aligns with our union’s goals”?

Courts have said yes, and I agree.

gonational
Yes!

I hope more businesses start taking this stance (staying the heck out of the way of their customers' moral and political business); otherwise, we're looking at a future that's no different than what, e.g., Chinese people deal with, except in our case the governing bodies are tech monopolies.

GitLab, take my wallet. I've had an account since the beginning, and I've wanted to start moving my private repos over from GitHub ever since MSFT bought it; this gives me the perfect excuse to get that done. Thank you, GitLab.

empath75
I, as a user and as an employee, am going to refuse to do business with nazis or to do business with people who do business with them. It’s a free country and I’m allowed to do that, and companies like GitLab can do whatever they want with that information.

I’m not the only person like me, and they’re going to have a choice as to whether they want to do business with nazis or do business with people like me.

They don’t get to just wave their hands and make these sorts of decisions disappear. They have to make a choice.

doubleunplussed
If someone replies to this comment saying they are a Nazi, are you going to stop using hacker news? Perhaps you're restricting yourself to only paid services, but that still doesn't seem like a tenable plan. Nazis buy coffee from the same cafés as you, they ride the same trains as you.

If you think people should be removed from society altogether, you should vote to make what they're doing illegal so they can all be thrown in jail. Short of that, I don't think it's fair to try to prevent them from participating in society. Jail is the way we prevent people from participating in society, stopping them from using their bank accounts seems like a half-measure.

fake-name
> If someone replies to this comment saying they are a Nazi, are you going to stop using hacker news?

If someone from $ORGANIZATION made it clear that people who work for them and don't like Nazis aren't welcome to say that they don't like Nazis, I'd certainly consider dropping $ORGANIZATION.

> you should vote to make what they're doing illegal so they can all be thrown in jail.

What? There's a broad spectrum of things I personally don't like, or even think are odious, that I'm not seeking to make illegal. The law is not a reflection of morality. I'm not seeking to impose my own value judgements on others (excepting murder, etc...), but the illegality of something has no substantial bearing on what I view as right.

What do you think the laws are for, out of curiosity? I assume you don't think they're supposed to reflect morality?

doubleunplussed
> If someone from $ORGANIZATION made it clear that people that work for them and don't like Nazi's aren't welcome to be clear they don't like Nazis, I'd certainly consider dropping $ORGANIZATION.

This is shifting the goalposts and not an answer to my question. Allowing Nazis does not imply disallowing anti-Nazis. You can allow both.

Laws are for lots of things, but they are for imposing morality, yes. I mean, I'm a consequentialist, so morality and "good outcomes" are synonymous for me. The law is to create incentives that make people cooperate in prisoners' dilemma type situations, at the societal level.

You personally avoiding Nazis is one thing - you are entitled to it and it has nothing to do with the law. However, if you are taking part in activism to exclude them from society, not just your part of society but payment systems, social media, things where them taking part has no effect on you, then I don't know why you don't go the whole way and make their views illegal. Also, being excluded from society in the ways that people think Nazis should be is a consequence so harsh that I would want it to have legal oversight. As the past has shown, "I don't like people x" plus mob rule is not a good way to decide people's fates.

Avoid them in your personal life, sure, but it sounds like you're talking about activism to exclude them from places where you could just not interact with them.

Just don't contribute to Nazi gitlab projects. If you're boycotting gitlab to try and get them to exclude Nazis, then I think you are using the private sector to enforce what you think should be a general rule, and I think companies and mobs are the wrong tool for the job. Take it to the voting booth, that is what democracy is for.

adamsea
To quote someone quoted in the article, "As a commenter identified as "casiotone" observed, "If your values aren't used to inform who you're doing business with, why do you bother pretending to have values at all? This [merge request] demonstrates that you don't have any values except 'we want to make money, and it doesn't matter who gets hurt.'""
prepend
Because values are useful for different things. I think casiotone is confusing lack of values with specific values. GitLab has values but they don’t include stopping specific companies or orgs from using their service, other than illegal stuff.

They even call out efficiency as a value. That’s not the same as “we want to make money.” An artist may value art more than relationships so they are a jerk to helpers.

This doesn’t mean the artist lacks values or that the artist wants to make money. It just shows specific values.

trickstra
> I think casiotone is confusing lack of values with specific values. GitLab has values...

Yes, as pointed out above, the value is demonstrated to be "we want to make money, and it doesn't matter who gets hurt". Casiotone didn't say they don't have any values at all.

fake-name
Saying "we don't care about anything but whether it's legal" is a value. Not deciding is a decision, here.
3xblah
Maybe they are not "infrastructure". Maybe they are actually middlemen.
dominotw
> The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product.

isn't that true for every voice in society, though?

mayneack
> The engineers, designers, and PMs shouldn't have an outsized voice in society because they have a specialized useful skillset and ended up on a successful product.

The size of every group's voice in society isn't a magical balance. I don't want rich people or corporations to have an outsized voice either. It's a zero sum game, so the only way to reduce that is to give more voice to other groups. Employees should try to leverage their numbers more, not less, because that's the only tool they have. Gitlab is based in San Francisco, so their votes probably don't matter either.

InfinityByTen
Finally I read a comment on a platform where I can listen to ideas and deep understanding. Thanks for sharing that interview!! You won Naval a fan already :)
jka
The trouble with libertarianism is, ultimately, that libertarians only care about themselves.

The kind of policy that GitLab is proposing here is fine - or even aspirational, under a certain lens - until one day you wake up and find that the same software is being used to destroy your community and compatriots.

How could that happen? Isn't this free speech to max utility? Isn't that flawless?

No, it's free utility to anyone - and you might eventually find that the people who are using it don't have you and your neighbours' interests at heart.

The design of technology is _not_ ideologically neutral. It contains embedded biases and beliefs - even if they were subconscious or part of a community mindset - and those can and will be changed and overruled over time.

Careless licensing of technology will lead to the overriding and subversion of those ideologies, in ways that many of us would find abhorrent.

If you'd like to support that on the basis of free speech, you should ask yourself whether the worst re-applications of these technologies would still allow you to speak freely.

austincheney
> The trouble with libertarianism is, ultimately, that libertarians only care about themselves.

I am wondering on what you base this line of thinking? Do you have any examples where persons advocating for liberty are acting primarily out of self interest?

jka
You're right, this was an off-the-cuff generalization and wasn't well explained; here's roughly where I was coming from:

- GitLab is claiming to do business with any customer, on the basis of free trade and contribution

- GitLab is encouraging a culture where employees do not discuss or talk about those relationships

- I believe that technology is not neutral and the behaviours we as software engineers define in code, which then may affect thousands or even millions, are a result of customer influence (product decisions, feature requests, pull/merge requests, etc)

The commit[0] I read describes this as inclusion and 'efficiency' (i.e. 'yours not to question why'), but the real outcome I foresee is: "we are going to do business with shady companies and/or countries, and we want to encourage a company culture where this is seen as fine and dissent is discouraged"

This is self-interest on the part of GitLab the corporation - and ironically it also reduces liberty and freedom for GitLab employees to speak openly.

Meanwhile I can understand the top-level comment's description of why this might in some sense be a good business strategy. There's not a whole lot of discussion around the negative externalities of being successful in a way that compromises values though.

[0] - https://gitlab.com/gitlab-com/www-gitlab-com/commit/b5a35716...

flukus
I always thought relying on companies to enforce morality was far more libertarian; their ability to do so relies solely on companies having freedom of association. The result is that it limits free speech to anyone not causing bad publicity for the company - they only care about themselves, as you said. Last year it was neo-nazis, this year it's anyone speaking ill of the CCP.

As a definitely not libertarian I think we should be placing much greater restrictions on companies ability to discriminate against their employees and customers.

friendlybus
You say that like the rest of the gov does not exist. The mere knowledge that a dangerous program exists can be reported to the FBI with all relevant information without Github having to run a gestapo firewall on all content it hosts.
jka
Can everyone in the world appeal to the FBI?
friendlybus
Yes? No? I don't know what happens when an Iranian sees an app on Github that is hostile to his interests and then reports it to the FBI. It'll probably work for countries close to the US. Microsoft will probably make relevant info available to the UK, but obviously not Iran.

I mean, if your qualifying rule is everyone in the world, you've given Github the job of being world police. A role that won't make an earth-shattering impact, because bad actors can make those same programs offline. Why not have it in the light where we can see it coming? Being world police is the gov's job. Github should be treated as an American service and be required to respect only the governments the US cares about, and let the rest of the world use it on an "as is" basis.

jka
I agree with you that policing all the content on a large user-driven community site is untenable.

My point - and I didn't make this clearly - is about the relationship between the company (GitLab) and the customers it chooses to do business with (think: aggressive undemocratic countries and the companies within them that enable oppression).

If you live in one of those countries and find that you are being stepped on by tools enabled and built by a U.S. company, then you're going to question the ethics and intentions of that company.

Until now GitLab has fostered a very positive open source ecosystem, making their own core product available for modification under the MIT license. That's fine, and that seems a principled and ethical stand.

It's still possible that their code will be deployed and used for malign purposes; those are the cases where as a victim you'd contact your (local) law enforcement if there's anything clearly illegal.

What seems a lot more questionable here is who GitLab chooses to do business with and profit from, and to allow influence over the direction of the product and the code itself. That is the change which is signaled in this policy shift - it is that GitLab will do business with anyone, and that employees shouldn't discuss that - indeed they are being told to treat it as some kind of virtue.

GoblinSlayer
When USA government deems undemocratic countries problematic enough, it issues a ban on deals with them: https://dailycaller.com/2019/08/06/trump-executive-order-emb...
kissgyorgy
You are the kind of guy who would see a murder and say "Not my problem." Not great. Everyone should be the moral compass of everyone else. Society nowadays seems fucking terrible.
ProAm
While I largely agree, history has shown this won't fare well for humanity either. Look at IBM and the Holocaust [1]

[1] https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

eeZah7Ux
The flurry of downvotes is telling.
V-2
That comment didn't exactly add much value to the discussion, given that this exact (and fairly obvious) example is prominently featured in the article, under the headline "Historical precedent".

> If you can see how people might respond to IBM, infamous for providing technology that helped the Nazis in World War II [...]

ProAm
> The flurry of downvotes is telling.

I'm not sure I understand the comment?

brnt
'Arbiters of morality of society' is only possible because American capitalism is rigged to favor gravitation towards one or a few consolidated entities. This need not be. When Americans ask "where's Europe's Google?", I smile, because the landscape, for now, seems to prevent consolidation into a handful of oligarchs, and therefore no one company can arbitrate anything. There's always loads of choice, or there should be.

Next, the bystander effect. Is it commendable that a company watches as bad actors use its platform up until a court ruling puts an end to it? Another area where Europe has history, and where Americans would do well to learn from it.

vlunkr
> where's Europe's Google

Europe's Google is Google. Everyone there uses it, so why would you need your own?

neonate
Wow, that is such a clear analysis that someone should transcribe it. Here's the main part of what Naval says:

"If Google, Facebook, and Twitter had been smart about this, they would not have picked sides. They would have said "We're publishers. Whatever goes through our pipes goes through our pipes. If it's illegal, we'll take it down. Give us a court order. Otherwise we don't touch it." It's like the phone company. If I call you up and I say something horrible to you on the phone, the phone company doesn't get in trouble. But the moment they started taking stuff down that wasn't illegal because somebody screamed, they basically lost their right to be viewed as a carrier. And now all of a sudden they've taken on a liability. They're sliding down this slippery slope into ruin, where the left wants them to take down the right, the right wants them to take down the left, and now they have no more friends, they have no allies. Traditionally the libertarian-leaning Republicans and Democrats would have stood up in principle for the common carriers, but now they won't. So my guess is, as soon as Congress (this day is coming if not already here...)... the day is coming when the politicians realize that these social media platforms are picking the next president, the next congressman. They're literally picking, and they have the power to pick, so they will be controlled by the government."

nostrademons
They did this for many years - when I moved to Silicon Valley in 2009 tech's attitude was "We just provide the mechanism, we let our users determine policy". My first couple years at Google (09-10), management was very insistent that "We don't take sides - we show the user the best result for their query regardless of how much we dislike it." There were examples like literal searches for Nazis where the top result was a hate group and rather than censor it, Google took out a house ad (basically paid themselves, using the mechanisms available to the general public) saying "Don't like these results? We don't either, but we believe strongly in freedom of speech, and so we're using this space to explain our views without depriving others of theirs."

The thing is, sometime around 2012-2014 this became unacceptable. Public discourse shifted into "silence is complicitness - if you allow this speech on your platform, that means that you endorse it." So it became a forced-choice issue. And the thing is that if it's a forced choice, very few people are going to stand up and defend Nazis. Most of these tech execs personally hold fairly liberal, progressive views. When the general public said "You're a Nazi sympathizer because you show Nazi websites in the search results", a.) it wasn't true and b.) it was personally insulting, because that was essentially the opposite of what actually was true.

And then once that started happening, you run into the age-old problem with restricting freedom of speech, which is that the people who hold the power to restrict it might not agree with you on what's worth restricting. Be careful what you wish for.

neonate
> Public discourse shifted into "silence is complicitness - if you allow this speech on your platform, that means that you endorse it."

Public discourse? Or the loudest segment of the discourse most influential on Googlers?

nostrademons
The loudest segment of the discourse most influential on Googlers, but this caveat is usually inherent when we talk about "public discourse". (If you've studied Foucault, you'll recognize that power and influence are inherently tied in with the concept of discourse. Public speech happens all the time, but it only gets labeled "discourse" when it influences or is intended to influence society in some way.)
neonate
The way you put it above seemed to imply the discourse in society as a whole, not a narrow and unrepresentative segment of it.
tareqak
TL;DR: the exercise of advertiser preferences, not the shift in public discourse, is the true cause.

I agree with your account in every way but one, and I recognize that I might be wrong about it.

In my opinion, it was not public discourse that shifted first. Instead, in my opinion, it was advertiser choice to not have their ads shown alongside certain content be it search results, YouTube videos, tweets, Facebook posts, or whatever else these tech companies were carrying.

I believe that ignoring advertiser preferences in this discussion is harmful in that it ignores their impact on both business choices and public discourse, such that well-meaning commenters here end up talking past each other needlessly.

If we consider the role of advertisers in this discussion, then the comparison of these tech companies to telephone, radio, and newspapers becomes more interesting if not more apt. The advertisers don’t have any sort of explicit freedom of association given to them by the tech companies about what kinds of content they are allowed to associate with, but the point becomes moot with advertisers being customers allowed to take their business anywhere.

1. Taking the comparison to telephone: what if there were a phone service that provided free calls with the caveat that the caller or the receiver had to listen to an ad? What if that ad was in some way, shape, or form related to past or present telephone conversations? I would personally find it annoying to have some third party delay my conversation, but I can see where this might be useful for people who would not otherwise be able to afford the service. I assume that most people who use telephones would feel similarly.

2. The comparison to radio, newspapers, and television is less interesting in that those mediums are essentially curated, but they do run ads and/or sell subscriptions. Some of them even offer services that come about from gaining expertise in their domains or being in close proximity to content creators in their respective mediums (record labels, television and movie studios, recommendation sites, marketing firms, public relations experts, etc.), which in turn can be funded by subscriptions, ads, or investments.

Consequently, there are similarities between these media. First of all, these tech companies offer services that allow ordinary people to broadcast themselves to a wide audience, much like radio or newspapers. Second of all, these tech companies make a majority of their income from a combination of ads and subscriptions. Third of all, users of Facebook, Twitter, Google and the like have conversational expectations much like those of telephone conversations: even people working in radio, newspapers, and television feel really strongly negative when any other party tries to change their message or silence them.

Like some of the other commenters here, I too feel that these tech companies have made a mistake in policing content the way they do. I believe that they did so to this degree wholly to appease advertisers. Had they been laser-focused on preserving the integrity of what their users posted before the demands of their advertisers, then things could have been very different.

If I were an advertiser and I only wanted to show ads to people who like cat pictures, but absolutely never show those ads to people who like pictures of other animals, even if there are cats in some of those pictures, then these tech companies should build the tool for that.

In my opinion, these tech companies should be even more of a transparent marketplace between ad buyers and content producers, respecting the free association that both these groups are allowed to have with each other, and should not police or manipulate content that is permissible by law (keep things chronological, with a user being able to keep a list of favourites). Then this whole discussion would boil down to which set of advertisers pulled sponsorship from which set of content producers and vice versa.

Yes, this is all easier said than done. Yes, the thing that it would all boil down to is potentially a problem in itself. Yes, these tech companies still have to follow laws that can be in conflict with laws elsewhere or with some set of values in general. Yes, people would still take issue with tech companies over something else, related or not.

But, at least then, we wouldn't be talking about this.

9HZZRfNlpR
There was a time when the Jewish founders of Google strongly defended conspiracy websites coming up as top results for the query 'jew'.
adamsea
If search engine results are a "public space", can you yell fire in a crowded theater?
mike00632
Google listed its own services high in search results in order to promote and show favor to those services. Why shouldn't people use this same logic to criticize Google for effectively promoting hate groups?
nostrademons
The thing is - it didn't, at least while I was there. The triggering of universal result groups (eg. Local, Shopping, or News results within the search results page) was based on the probability that for a given query, the user would end up wanting to switch over to the respective Google property rather than click on the top result. There was no manual tweaking of search results, and the formulas were so simple (basically straight ratios) that it's hard for them to be mucked with.
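For what it's worth, a toy sketch of that kind of "straight ratio" trigger might look something like this; the function, data, and threshold below are invented for illustration and are not Google's actual formula or code:

```python
# Toy illustration only: a "straight ratio" style trigger for showing a
# universal result block. Names, data, and the threshold are invented.

def should_trigger_universal(click_counts, property_name, threshold=0.3):
    """Show a universal block (e.g. News, Shopping) for a query when the
    fraction of historical clicks that went to that property is high enough."""
    total = sum(click_counts.values())
    if total == 0:
        return False
    return click_counts.get(property_name, 0) / total >= threshold

# Example: 40% of clicks for this query historically went to News results,
# so a News block would be shown inline on the results page.
clicks = {"web_results": 600, "news": 400}
print(should_trigger_universal(clicks, "news"))  # True
```

The point of the sketch is just that a ratio-plus-threshold rule of this kind leaves little room for per-result manual tweaking.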
mike00632
What you say makes a lot of sense, but a court ruled that Google's search rankings were self-promoting. I guess a good question here would be whether there were similar algorithms to funnel people towards eBay or other competitors' shopping services if they were likely looking for those.

https://techcrunch.com/2017/06/27/google-fined-e2-42bn-for-e...

But this is really a digression. The point is that search rank does promote. We can come up with many cases where Google perhaps shouldn't display the most relevant results. The question is whether Google should use their near-monopoly to influence the world via search results according to their moral code instead of strict legal responsibility.

colejohnson66
> What you say makes a lot of sense but a court ruled that Google's search ranks were self-promoting.

And over a hundred years ago, the Supreme Court ruled that “separate but equal” was a valid argument. Doesn’t make it correct.

mike00632
Google was literally funneling people to its own services via its search results and had no similar linking for other services. The court's decision seemed reasonable.
Nasrudith
Just because a court says something doesn't mean it is reality. At all.

Likewise, can we all please put the open farce that is the abuse of the word "monopoly" to rest?

mike00632
A court is a third party that is more objective than a Google employee. That is why I mentioned it.

Google is a great service (that I love and use daily by choice) but they also spend a lot of money to make sure they are the default search provider. They have more means than smaller search companies and they use those means to achieve almost total market share. Then Google used their vast market share to skew business in their favor. That is almost the textbook definition of monopolistic behavior.

natmaka
> Public discourse shifted into "silence is complicitness"

It leads to the "deplatforming" approach, i.e. not only refusing to provide a neutral publishing platform (or related tools), but also actively fighting against those who do.

'Be careful what you wish for', indeed.

TulliusCicero
The phone company analogy doesn't really work, because phone calls are by default private, between exactly two parties, usually just two people. If you look at the tech company equivalent of that, like FB messenger, or Hangouts, or other messaging systems, they behave more similarly to the phone company. It'd be difficult to get in trouble for something you said in a private message.

If you want the analogy to make sense, you'd have to use something that's more of a broadcasting platform, like TV or radio, or a magazine.

aianus
> If you look at the tech company equivalent of that, like FB messenger, or Hangouts, or other messaging systems, they behave more similarly to the phone company.

FB Messenger censors pornhub links or at least it did a few years ago.

pryce
The "we're not being political" approach also fails at an earlier hurdle, which is that the decision to not allow "political" discussion at work, and the decision to refuse to review taking business from hate groups, are themselves inescapably political decisions.
m463
I think you're correct -- a phone call is one-to-one and the internet is a broadcast medium.

I believe radio (as in ham radio) actually has FCC rules regarding identification and profanity.

Reelin
It does, but as far as I understand that's only because EM spectrum is a very limited (as used in practice) public resource. The logic wasn't (many to one) -> (regulation), it was (limited resource) + (EM interference problems) -> (government controlled public space) -> (regulation). The US National Parks might be a good analogy here - very high traffic combined with limited space availability.

In particular, note that in the public resource analogy the regulation is quite neutral; legal censorship of profanity in the US is very narrow.

anm89
Disagree with this.

First off, in the world of VOIP and conference calls this isn't true anymore, and we still don't censor VOIP providers.

The point is you can choose to be a carrier or you can choose to be a service.

TheSpiceIsLife
We don’t need analogies.

It’s fairly clear how these organisations and their internet-powered machinations work.

So let’s talk about them in that context.

seamyb88
The phone analogy is terrible because the vitriol doesn't prevail. You aren't bombarded with 100 people's bigotry every time you pick up the phone.
shard972
lol, turn off ur phone
gd1
What about the postal service then? That is a common carrier. Never mind being bombarded with bigotry, people can bombard you with literal bombs using the postal service, but hey if some kids are shitposting memes on 8chan... well we have to shut down that service altogether.
Fnoord
Radio, TV, and newspaper are each a better analogy.

Postal service is more akin to the infrastructure of radio/TV/newspaper (which differs, and is not one but many: e.g. cable, FM, and paperboys).

The analogy with the Internet would be transit providers. The undersea cables between the US and EU, for example. These are heavily logged, e.g. by the UK. Imagine all your post being scanned; that'd be GDR-esque.

clarkmoody
Perhaps if it's presented this way:

Group X that we don't like gets its phone service from AT&T. We know they're conducting their hateful business, fundraising, and coordination over AT&T's service. We need to put pressure on AT&T to cut off their service, or we'll assume that AT&T endorses their message.

TulliusCicero
The analogy still doesn't work. AT&T phone service is not a broadcast platform, and phone service is usually a government-granted monopoly that's highly regulated.

There are very obviously plenty of businesses where we'd expect people to cut off hateful organizations from service. I imagine most party caterers wouldn't want to host the KKK, and nobody would think this unreasonable.

eanzenberg
Which is absurd. Putting that kind of censorship control in the hands of AT&T is dangerous. That’s why we have laws and elected officials.
read_if_gay_
That's the point of the analogy. People keep saying Twitter etc. are private companies and can do whatever they want, apparently without realizing how ridiculous it all seems when applied to another business.
kelnos
The telephone analogy is terrible. Phone calls are usually between two people, or a small number of people. At the very least, they're bounded to some relatively small number.

When you post something on Twitter, unless your account is private, you've broadcasted your speech to anyone in the world who cares to see it.

Not moderating any speech at all is a quick way to allow a community to turn into a cesspit of spam and trolling. Look at any web board or forum over the past several decades for inspiration. They either have moderation, or they get overrun.

arkh
> They either have moderation, or they get overrun.

But those boards are moderated by people who are part of the community. Not by the software nor the software host.

Twitter, FB, Google are not part of those communities. If they want to be a cesspit of spam and trolling, they should be free to do so.

ericdykstra
Twitter isn’t a phone call, but it also isn’t a message board. You only see the tweets of the people you personally choose to follow; it doesn’t matter if there are 100 billion spam accounts posting ads to nobody.

There are millions of Twitter accounts in Japanese, but if you don’t follow any of them, and nobody you follow retweets them, you’d never know they exist. You can also block or mute specific accounts, mute keywords, etc for even finer control.

kelnos
That is a vast oversimplification of how Twitter works. People absolutely see tweets from people they don't follow, all the time. Sometimes it's a promoted tweet, sometimes a retweet, sometimes just by browsing through trending topics. It's pretty much impossible to use the platform in such a way that you'll only see tweets from people you follow, and nothing else.

Agreed that Twitter isn't a message board, but that's a much closer analogy than calling it a phone call.

GoblinSlayer
Imagine if those promoted tweets were delivered to you as automated phone calls from spoofed numbers.
fouc
> You only see the tweets of the people you personally choose to follow

If that was true, that would help a lot.

lovegoblin
> You only see the tweets of the people you personally choose to follow

Have you been on Twitter recently? If I don't consistently switch my feed to "latest" away from the default "home", I very consistently see tweets from people I don't follow that Twitter thinks I might find interesting or whatever (usually incorrectly).

Eleopteryx
>Twitter isn’t a phone call, but it also isn’t a message board.

Well, Twitter is Twitter. But Twitter is probably closer to a message board than it is to a phone call.

dragonwriter
> You only see the tweets of the people you personally choose to follow

That's not at all true by default; you also see tweets from people that have paid for involuntary reach for the tweet (promoted tweets), tweets that people you follow have interacted with, tweets that respond to or mention people you follow, and probably others that I'm forgetting.

ericdykstra
So Twitter broke their platform, and they show ads. This is vastly different from a message board that shows all messages to all users.
testis321
There should be only one choice: either you're a platform or a publisher.

A platform should be open, and only illegal stuff should be removed. No preferences, no nothing.

A publisher can be closed, can be whatever they want it to be, can filter whatever, promote whatever and censor whatever.

...but!

If you're a publisher, you've decided what to publish, and you should be responsible for all content published on your webpage. Fake news? Illegal porn? Nuclear bomb plans? Your problem, your responsibility, you face the consequences.

If you want to be an open platform and blame users for content posted there (and have them face the consequences), then you should have no right to promote, censor or block/hide content (except illegal).

mike00632
The "illegal" criteria seems rational for countries that have rational laws but in many countries there are laws against political speech or criticism of people in power. By removing that 'illegal' speech in those countries the platform is in effect working as a government actor to police customers in those nations.
Reelin
In such nations (ex China), this entire debate is irrelevant - you follow their rules or they kick you out.

In western societies, this debate is relevant and I strongly side with the poster you replied to. That being said, reasonably unbiased filtering should probably be allowed in some form. For example, while I don't want YouTube picking sides politically, I also wouldn't want to force them to host hardcore pornography against their will.

jacquesm
'we're publishers' is exactly the wrong thing to say. Publishers have some responsibility over what they publish.

The smart thing to say would have been 'we're common carriers'. And then to actually behave like common carriers.. At least that would have gotten them to first base. Now they look like heavy handed censors with a stake in the game.

pas
They fucked up when money entered the picture. Naturally some companies don't want their ads next to ISIL/ISIS beheading/recruitment videos for some reason... and it's allegedly bad for growing the audience too.

The bottom line is, that G/FB/Tw did not remain neutral, because that hurt the aforementioned bottom line. Now, of course - as you said/transcribed - the problem is that if fascism becomes profitable then what?

mch82
Edit: I see the Naval quote is from the Joe Rogan podcast (which I often enjoy). I haven’t listened to the episode, so don’t have full context but Naval seems to have the timeline, “publisher”, and “platform” confused.

Facebook, Google (YouTube), and Twitter have historically claimed to be “neutral tech platforms” like phone utilities or internet service providers. They maintained that claim so that responsibility for content would be on the user/sharer and so they could avoid regulation.

More recently, at least Facebook has claimed (2018) it is a “publisher”. They updated their claim in order to justify the right to editorialize & choose what content people see on Facebook.

The shift to “publisher” aligns with the reality that the news feed algorithm is a form of editorializing. The algorithm decides what you see. It doesn’t matter if a human or a computer is making the decision.

manfredo
The news feed does not editorialize. It doesn't edit the content itself. It's automated curation.

Furthermore this whole "publisher" vs "platform" argument is not nearly as relevant as people make it seem. Contrary to popular belief, tech companies describing themselves as publishers instead of platforms does not affect the fact that they are not held liable for user generated content (so long as they meet reasonable standards for taking it down when notified). The New York Times is afforded the same protection for user generated content that they host (e.g. their comments section).

mch82
“Automated curation” is still editorializing. A group of people set the algorithm. Computers apply it. There is no difference, except the speed & consistency with which the task can be performed.
mch82
“Facebook, [company attorneys] repeatedly argued, is a publisher, and a company that makes editorial decisions, which are protected by the first amendment.” https://www.theguardian.com/technology/2018/jul/02/facebook-...
manfredo
...which doesn't at all affect their section 230 protections. Again, contrary to popular belief, whether or not a service considers itself a "publisher" or a "platform" has no bearing on its protections under section 230. It's all about whether the content was user generated, or written by the service itself.

Why people are dedicating so much attention to the fact that Facebook calls itself a publisher is a mystery to me.

See https://www.eff.org/issues/bloggers/legal/liability/230 for more details.

mch82
Thanks for that reference. Fan of EFF. I’ll check it out. The case history archive is a pretty cool resource.
Nasrudith
It seems to be someone's talking points, or more charitably a meme (differentiated by how organic the idea is). The publisher/platform dichotomy is a flat-out wrong myth which keeps on being repeated - and often from high places.

I would call it outright propaganda at this point, because the dogged insistence on not listening to objective facts seems more "big lie" style opinion shaping than mere ignorance which has caught on. At this point I reflexively downvote all who repeat it uncritically as "fact".

angrygoat
A better analogy for Twitter and Facebook is the town square, rather than the phone company. They don't just enable point to point discussion, but broadcast to any and all who will listen.

Which is fine, and good, except when algorithms keyed only to engagement cause the noisiest, most provocative and potentially divisive content to be amplified.

We have, for a very long time, had laws concerning appropriate behaviour in a town square, or any other 'public' setting: don't incite riots, don't behave violently towards others, be respectful if someone doesn't want to talk and walks away. I support the efforts we're now seeing to bring those norms into online spaces.

mavhc
This is the real issue with technology: amplification. One person has one voice, but then they buy a megaphone, or say something to get attention online, and the algorithms are programmed to say that's good. What's picked to be amplified, or muted, is the issue.
viraptor
> We're publishers. Whatever goes through our pipes goes through our pipes. If it's illegal, we'll take it down.

That's a great way to lose users though. They need to provide some limit on abuse. That means either good moderation tools for end-users (which I don't see how they could scale without also providing bad-people lists) or moderation on their side. Otherwise you end up with /b/ or worse.

If they want their users (and advertisers) to stay, they need to filter content. They're social networks, not limited-scope opt-in party lines.

ericdykstra
This sounds right, but what if the corporations are more powerful than the government? What if they have enough money to collectively rent enough politicians so that they never have to face the scrutiny that Naval is talking about here?

If social media platforms are picking the next congressmen and next president, and you’re in Congress vying to keep your seat, do you really want to introduce or sponsor legislation to upset these companies?

Alex3917
I mean you can say it's a clear analysis in the abstract, but if you look at the HN comments on the thread about CloudFlare cutting off 8chan it's clear that most people no longer agree with the principle as soon as they're presented with a real world example.

https://news.ycombinator.com/item?id=20610395

BurningFrog
> ...so they will be controlled by the government."

The cynical Regulatory Capture inspired take is that they want to be regulated now. Because regulation always favors the established players and makes it very hard for newcomers to overthrow them.

Google, FB etc are now big and old enough to need that protection, and settle in to comfortable corporate middle age.

ezoe
The problem is, the current law doesn't match the pipe analogy. If it's a private communication, they can be a pipe. But they publicly present the information as soon as it gets to the pipe.

There are many expressions you can't publicly present by law: for example, copyright infringement, privacy infringement, pornography, national/personal/organizational secrets, fraud, malware and so on. You may argue some of them are ridiculous, but there are some expressions you would agree should be suppressed, like fake news about you committing a crime.

Yes, in an ideal rule-of-law world, you should need a court order to censor that expression. But that takes time. The current law is, if you knowingly publish material that is obviously illegal and ignore the takedown demand, you are responsible for it.

positr0n
> "We're publishers. Whatever goes through our pipes goes through our pipes. If it's illegal, we'll take it down. Give us a court order. Otherwise we don't touch it."

It is not possible to do that and make as much money as google, fb, etc. The end result of unmoderated public places on the internet is always this: https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

Which is great! But not for making money. From that article:

> Allowing any aspect of your brand to come anywhere near something unpopular and taboo is like a giant Christmas present for people who hate you [...] It doesn’t matter if taboo material makes up 1% of your comment section; it will inevitably make up 100% of what people hear about your comment section and then of what people think is in your comment section.

prepend
> It is not possible to do that and make as much money as google, fb, etc.

Exactly. This is a choice that companies need to make. Or a regulation is coming.

dragonwriter
A regulation is coming anyway; the CDA safe harbor, which is what both those complaining about what the big tech firms are censoring and those complaining about what they are not censoring are targeting, has been under broad, bipartisan attack for some time, under a variety of different pretexts.
noelsusman
Google, Facebook, and Twitter are not "literally picking" the next president or the next congressman. That is delusional.
chabad360
Obviously they're not "literally" picking the next president; however, they have full control over what opinions you see first, and they have utilized that ability to tilt many (yes, very many) people's opinions (especially those whose understanding of even general politics is superficial) towards a specific candidate.
Nasrudith
Under that logic we need to heavily regulate cereal boxes and milk cartons because many of us see that first thing every morning.

My philosophy is to always be suspicious of those grabbing at the keys to power - they are up to no good.

Animats
Worse, they don't want someone else doing their filtering. Try to write a Twitter or Facebook client with spam filtering. They don't want the user to have that kind of control.
oever
People working for Google, Facebook, and Twitter are smart. They understand this and have used the argument of being just infrastructure in the past.

They had to switch their story because they are not a neutral carrier. They prefer some messages over others. This is linked to their revenue model.

The simplest solution to 'just be infrastructure' is to show all messages linearly.

A more complex solution with possibly lower revenue is this: they could let the users pick algorithms to filter and sort the content, then they could claim to be infrastructure again. The user would need to be able to inspect the algorithms. The simplest one would be: no sorting, no filtering. That would make the sites unpleasant.

The smart people at Google, Facebook, and Twitter must have considered this option but decided that it is more profitable to share the power over what people see with the regulators than to share it with the people.
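A minimal sketch in Python of the "let users pick the algorithm" idea above; the Post fields and strategy names are invented for illustration, not anything the platforms actually expose:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Post:
        author: str
        text: str
        timestamp: float   # Unix epoch seconds
        engagement: int    # e.g. likes + replies

    # Each strategy is a small, inspectable function the user can read and choose.
    FEED_STRATEGIES: Dict[str, Callable[[List[Post]], List[Post]]] = {
        # The simplest option mentioned above: no sorting, no filtering.
        "raw": lambda posts: list(posts),
        # Plain reverse-chronological timeline.
        "chronological": lambda posts: sorted(posts, key=lambda p: p.timestamp, reverse=True),
        # Engagement-ranked, roughly what the opaque defaults optimize for today.
        "most_engaging": lambda posts: sorted(posts, key=lambda p: p.engagement, reverse=True),
    }

    def build_feed(posts: List[Post], strategy: str = "chronological") -> List[Post]:
        """Order the feed with whichever algorithm the user picked."""
        return FEED_STRATEGIES[strategy](posts)

The point is not these particular strategies but that the selection logic is visible and user-chosen, which is what would let the operator plausibly claim to be "just infrastructure".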

Jack5500
This has actually been the law in Germany, for example, and I think in many other countries as well. As long as the platform doesn't interfere with the content (§ 8 Abs. 1 TMG), they are not directly accountable for it (which doesn't mean they don't have to take it down in case of an infringement, for example). But as soon as they moderate said content, they become legally liable. And this is exactly what social media platforms have done, and that is one reason why they are responsible for their content now. This is of course only one part of the truth, because at the same time there has been a shift in the perception of these platforms, and other laws (e.g. the Netzwerkdurchsetzungsgesetz) have been specifically aimed at social media platforms to hold them accountable.
TheOperator
It's inevitable these platforms will have government mandated restrictions on speech, as well as government mandates to NOT restrict speech. There's no other way to really manage the issue.

Whinging of the poorly informed that free speech protections can only apply to the government aside.

johnzhou
That's a great way to lose advertisers. Case in point: Youtube's demonetization policy.
hnuser54
Any particular advertiser needs Youtube more than Youtube needs the advertiser. Youtube has a critical hold on the young adult market in particular. If Youtube just said, "if you don't like it, Pepsi, someone else can enjoy this ad space instead", what recourse would they have? In conclusion Youtube demonetizes because they want to and need an excuse.
cameronbrown
If you look at the data, the "adpocalypse" cost Alphabet over 70 billion dollars of market cap.
flukus
It's also a great way to whitewash history. Want to educate the next generation about Hitler's atrocities? No money for you.

Basically anything war seems to be getting demonetized these days. Advertisers only want to appear next to cats on roombas.

o-__-o
I B M
drak0n1c
Advertisers didn't care for over a decade until partisan agitators such as Media Matters and Vox methodically cherry picked screenshots of brands next to maligned youtube channels, and then contacted brands threatening to publish names and shame if they didn't pull their advertising.

The result of this short-sighted crusade was YouTube becoming unprofitable for millions of small creators around the world, with people on all sides of politics suffering, including non-political LGBT creators.

lackbeard
This is the first I’ve heard this hypothesis. Do you have links to any data?
padraic7a
Being a publisher might get you off in the States, but in a lot of countries publishers are held to be legally culpable for material they publish. So publishing nazis, online abuse, illegal content etc is a legal issue.
mch82
This is also true in the US. See my reply to the parent. The Naval quote is wrong.
anm89
He's exactly right on this. Facebook is not a court. If certain speech is illegal, how would they even decide? Have a court label it illegal and send them an order to remove it.

That concept exists within our system.

mch82
Naval is exactly wrong.

Facebook must choose between being a “platform” like a phone company and being a “publisher” like a newspaper or broadcaster.

The rules are different and the accountability is different, which is why Fb lawyers spent years and millions arguing for “platform” status.

Edit: my main goal for posting my comment is to prevent some startup on HN from reading the parent & deciding to classify itself as a “publisher” only to find later that was a huge mistake. I’m not a lawyer & they shouldn’t take my advice, but hopefully they’ll at least double check with someone who is.

prepend
Facebook certainly wants to be a platform based on their lobbying. But they are clearly a publisher by setting editorial guidelines and sponsoring and removing content.
weberc2
> If it's illegal, we'll take it down. Give us a court order. Otherwise we don't touch it.
kelnos
What about jurisdictions where the act of allowing something to be published will trigger a fine or other bad consequences? Doesn't it make sense to pro-actively remove things then?
namirez
> If it's illegal, we'll take it down. Give us a court order. Otherwise we don't touch it.

Interestingly, the underlying assumption in most analyses like this is that everyone lives in the US and obeys US laws. What if China goes after a political dissident through Facebook? Even worse, what if Facebook propaganda is used to incite genocide by the government of Myanmar? Where do you get the court order to take it down?

Unfortunately the business model of Silicon Valley is at odds with this simplistic first-amendment/court-order sort of argument.

legostormtroopr
> Interestingly, the underlying assumption in most analyses like this is that everyone lives in the US and obeys US laws.

Even better! Stop trying to act like global companies, and pick a jurisdiction to work out of. If Facebook is an American company (which they are), they are held to and follow US laws.

Maybe this means that Facebook doesn't remove Nazi propaganda because of the US 1st Amendment. What follows is actual proper competition: perhaps Germany bans Facebook, which means either a new German company, GesichtBuch, can compete by being based in Germany and following German laws, or people who enjoy Facebook rally their government to change the laws.

What we have now is companies trying to exist in every country, which means they have to follow the worst laws of every country.

BlueTemplar
Pretty much what happened with Google in China ?
wolco
So give up Germany (lost: billions in revenue) or give up Nazi propaganda (lost: thousands in revenue).

Companies are trying to make money. That's the purpose of a company.

prepend
So give up China (lost billions) or give up democratic propaganda.

It’s not the size of the market that should determine right or wrong. Having free speech laws means you get democracy and nazi stuff and spend time on more important stuff as to what’s too nazi or too democratic. Just rely on the rule of law.

wolco
It's the issue that should determine it, plus market size. Whether or not Nazi symbols are sold is not going to move the needle. Pushing back on China over democratic injustices is what we should be doing.
namirez
> Or, people enjoy Facebook and then rally their government for laws to be changed.

It's not going to happen, because people can connect to FB easily using a vpn without giving FB the chance to monetize ads. It's already happening in countries that have banned FB.

rushabh
The big difference in the analogy is that these pipes enable creation of “public messages” and this is the exact reason they make money.

The moment you start running a “commons”, you own the responsibility of moderating it. Wikipedia is another great example where they acknowledged the difference of opinion in the early days and found ways to balance it.

If you are not providing "open / public" platforms (like GitLab), it's okay not to be responsible for who participates and to let the law decide. Google, Facebook and Twitter don't have that choice.

mhermher
They're more like broadcasters than carriers. The phone companies don't broadcast. If you broadcast, then you're going to run into this issue one way or another.
dvfjsdhgfv
The problem wasn't so much about "taking stuff down that wasn't illegal because somebody screamed." More problematic was the fact that, in the case of Facebook, their algorithms promoted content that was more "engaging" (read: controversial), so basically the whole system turned out to be optimized for fake news.
sam1r
"Wow, that is such a clear analysis that someone should transcribe it."

There's no way the three could have predicted the show that would be put on today.

Just like we couldn't have predicted the massive impact iPhones and Android would have on today's society over the past decade.

My two cents, i guess.

Alex3917
I mean you can say it's a clear analysis in the abstract, but I said basically the same thing here when CloudFlare shut down 8chan and everyone lost their shit:

https://news.ycombinator.com/item?id=20616412

MuffinFlavored
> "If it's illegal, we'll take it down. Give us a court order. Otherwise we don't touch it"

I think that works better on paper than in practice. If I get offended enough by something, can't I go try to sue under the premise of defamation?

prepend
If you sue successfully you can get a legal order to take down the content as part of the judgement.

Libel, slander, and harassment are all crimes or civil offenses (illegal). They work pretty well.

sparrish
Carriers can't be sued for the content they carry. It's been tried many times and so long as the carrier acts like a carrier (neutral channel), it fails.
adrr
Having worked at MySpace and been sued by all the state attorneys general, I can tell you that defense doesn't work. We were ultimately held responsible for the actions of our users, including the exploitation of minors, which is what triggered the investigations.
tudorw
If Google, Facebook, and Twitter had been smart about this, they... would have consulted deeply with lawmakers, regulators, copyright owners and other deeply entrenched parties before doing anything... move fast, and, er, what was it again ?
arminiusreturns
Considering In-Q-Tel's involvement, sometimes I think the major social media companies were set up as Trojan horses to enable censorship and governmental control in the first place (besides the more obvious surveillance aspect).
atupis
Problem is that being only a carrier creates a barrier to monetizing the platform through ads. Nobody with a serious brand wants to see their ads next to nazi propaganda and some kinky furry porn.
elbrian
Excellent analogy! I'm so happy that telephone carriers do nothing to prevent my phone from being called by spammers 10x daily.
loceng
If you don't want to act as a steward of society - with your own leadership, governance, as a lead - if you don't want to be accountable to potential societal consequences, then sure, ignore the behaviour of bad actors or the irrational.

The problem has been lock-in and users unable to "take their network with them."

The idea of decentralization to me is being fully mobile and having the decision to be able to choose what network(s) you're part of, and in part that would be based on leadership, governance, rules that each platform can have in place - and ideally enforcing and evolving as necessary.

Imagine you could have Facebook but immediately decide whose leadership you're following for moderation and all other settings/decisions of the platform - and thus who your attention, and money, goes to.

The government being in charge of moderating free speech - except for perhaps acting along with, setting rules for authority like police - is going to be very inefficient, bureaucratic, not leveraging the efficiencies of free market systems, capitalism.

Government could potentially play an important role, however private sector options are likely to exist sooner and be better. Let people choose who moderates their community, networks, allow the best options to naturally rise to the top - to act as role models for competitors to take notice and attempt to mimic the best parts.

loceng
This comment's votes have been going up and down since I posted it - funny to see.
arrrg
Public-facing communication has always been trickier and much more muddied.

Twitter is not like telephone (except direct messages I suppose).

A more apt comparison would be newspapers (which have publishers and editorial staff) or radio/TV (with similar structures).

So the analogy only really works if you claim something like “Twitter should have been like paper mills“, but I’m not sure whether that’s the right analogy or useful way of thinking about Twitter.

Sometimes things are just complex and I do actually think it’s wrong for Twitter to run away from this responsibility. They are the ones who want to run this public-facing way of communicating as a capitalist enterprise, they should be beholden to doing it right (whatever that means – but that’s the hard part and it’s ok that it’s hard).

te_chris
The day they started messing with ML-driven results and timelines was the day the whole argument about neutrality went out the window.
spamizbad
Doesn’t make sense: a phone conversation is more akin to a gchat or messenger convo. A Facebook page or wall is more akin to publishing... and historically publishers have used considerable discretion about what gets published.
sanderjd
The problem with the analogy to the phone system is that conversations via phone are transient and private. There is no place with the AT&T logo on it where you can go and see that awful thing that person said. The internet itself is a much better fit for the phone company analogy, and indeed internet service and backbone providers have pretty much entirely been able to stay out of this.

A better (though still imperfect) analogy really is media: TV stations, newspapers, magazines, radio stations. And indeed you see similar problems there as the online media companies are facing now. The biggest difference is that traditional media companies were forced to make editorial choices because of physical constraints - physical space for print, airtime and spectrum for TV and radio - so this question of whether you could just let anyone say anything in your publication or on your broadcast just never really came up.

dmode
Naval, like most VC investors, thinks too highly of himself and goes on Twitter tirades in areas where he has absolutely no clue what he is talking about. The quote above is a real example. Their views are tinged by living in advanced Western societies, and in their libertarian bubbles. In real life, a platform like Google or Facebook, with unprecedented global reach, is the opposite of a private phone call. ISIS rode YouTube's lax enforcement policy to create a Caliphate and slaughter millions of people. Was Naval advocating that Google wait for an order from the Syrian government before taking down beheading videos? In India, WhatsApp is used by religious extremists to form lynching mobs, resulting in the deaths of multiple people based on nothing but rumors. We are living in a world where software platforms hold unprecedented leverage to alter people's lives. We should try to move away from one-size-fits-all simplistic policies created in a Sand Hill Road thought bubble.
neonate
If I understand the argument, he isn't saying that those horrible things are fine and should be allowed. He's saying that it's the government's job to do something about them. Once the tech companies took a bite out of that poisoned apple, their fate was sealed, to become government-controlled.
dmode
In many places across the world, there is barely any government to act on these things. A functioning government is limited to a handful of countries
Mvandenbergh
The problem with at least Facebook and Twitter (and possibly Google) in this analysis is that their attention-grabbing model requires them to analyse and interfere with the data coming through their systems. The only way not to do that would be for FB timelines and Twitter feeds to be strictly sequential feeds, which at some point in the past they may have been, but they are not now.

Once you start basing your business model on selectively making some content more visible, you have already crossed a rubicon between a strictly neutral common carrier and a publisher.

(There is an error in the quote where they use "publisher" when they mean "common carrier"; a publisher is precisely an organisation that does have full editorial control over what it puts out, which is the opposite of what they intend in the rest of the quote.)

uchman
He oversimplified the issue. Phone companies don't make money through ads. FB/Twitter/Google make money by "getting into the weeds". They chose a business model with much more reward (hence much more risk). They're basically running ads businesses. You post about shoes, they sell you shoes. You talk about unemployment, they sell you a candidate? GitLab/Microsoft et al. have business models that (for the most part) aren't dependent on ads.
BlueTemplar
People warned that this would happen when Facebook (YouTube, Steam, etc.) started sliding in this direction.

And years before Facebook was even created, a middle way was also proposed where a carrier like Facebook wouldn't be legally responsible for the content they carried, unless they started to select and/or moderate that content, at which point they would lose the carrier status, becoming a simple broadcaster, which would be legally responsible for everything that they would help to broadcast.

motivic
> unless they started to select and/or moderate that content, at which point they would lose the carrier status

But since the homepage feed (or any medium, really) displays content in a certain order, some selection must take place.

Typically some algorithm (usually a recommender system together with some business logic) is used to determine which content, out of all that's available to you, is actually shown to you and in what order.

Bias seems to be an unavoidable part of the design to me.
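As a toy illustration of why the selection step carries bias: even a bare-bones ranking pipeline has to combine a relevance estimate with some business logic, and any choice of scoring function privileges something. All names here are hypothetical, not any platform's actual pipeline:

    def rank_feed(candidates, user, relevance_model, business_boost):
        """Order candidate posts for one user.

        relevance_model(user, post) -> float  # e.g. a recommender's predicted interest
        business_boost(post)        -> float  # e.g. boost ads or "engaging" formats
        """
        scored = [(relevance_model(user, post) * business_boost(post), post)
                  for post in candidates]
        # Whatever we put in the score decides what gets seen first;
        # there is no neutral choice once an ordering is required.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [post for _, post in scored]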

solveit
Everyone was fine with chronological order.
motivic
In that case one can just spam the same (or similar) post every second and your feed will likely be filled with that post.

I know I'm setting up a bit of a straw man here but my point is even chronological order can be exploited.

leereeves
And if you did that, people would unfriend/unfollow you. Problem solved.
Fnoord
> my point is even chronological order can be exploited

The question is, how easy can it be?

The straw man is indeed that you can filter such things, client- or server-side. If you see the same shit the whole time, you ignore it all. Easy; even IRC clients with scripting had such features in the 90s. Here is a list of techniques used in e-mail filtering [1]

[1] https://en.wikipedia.org/wiki/Email_filtering#See_also
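For comparison, a killfile-style client-side filter is only a handful of lines; the field names and rules below are made up, but the shape is what Usenet killfiles and scriptable IRC clients offered decades ago:

    KILLFILE = {
        "authors": {"spammer42"},        # hide everything from these accounts
        "phrases": {"crypto giveaway"},  # hide messages containing these phrases
    }

    def apply_killfile(messages, killfile=KILLFILE):
        """Receiver-side filtering: the server still carries every message,
        but the user's own client decides what gets displayed."""
        kept = []
        for msg in messages:             # msg is a dict: {"author": ..., "text": ...}
            if msg["author"] in killfile["authors"]:
                continue
            if any(p in msg["text"].lower() for p in killfile["phrases"]):
                continue
            kept.append(msg)
        return kept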

jsjohnst
> I know I'm setting up a bit of a straw man here

Especially as this thread has already mostly established that spam filtering (which your example would easily trigger) isn't in opposition to the goal of neutral status.

roenxi
There are shades of grey. However, we have leaked footage of a Google co-founder saying (at an all-hands meeting no less) that the outcome of a democratic election conflicts with Google's values. There is a lot of room for interpretation there, but there are signals from Google in particular (eg, donation streams; leaked video; the occasional scandal bubbling out) that their management might be seeing the world through a partisan lens.

Abstract ideas of unavoidable bias are only of academic interest; the right wing of politics is justified in seeing Google as a direct political threat. That would not be justified if Google had a strict "no political talk, no political campaigning, we are the Switzerland of the internet" style policy for their workplace.

platz
Every industry sees the world though a partisan lens to some extent. Those criticisms also apply to energy, education, repair, etc
magduf
Exactly. Criticisms about Google's politicking are unjustified as long as lobbyists are allowed in other industries.
m10i
Would you mind providing a source for this leak?
roenxi
It was that one from late last year. Might have been https://www.breitbart.com/tech/2018/09/12/leaked-video-googl... . There wasn't anything there that was particularly scandalous; it was just really interesting that this was the state of affairs inside Google.
disgruntledphd2
I mean, it's OK if management have political views, as long as those views don't influence search results.

As long as the experimental process is rigorous and the people are incentivised to use the right metric (and the metric isn't politically biased), then it should be fine.

However, upon making that argument I find myself concerned at the possibility that Google's corporate interests (and perhaps some political interests) may shape which questions get asked, and thus the direction the service takes.

It's quite analogous to Chomsky's view of news organisations in Manufacturing Consent.

magduf
I honestly don't see the problem with Google using its position to affect people's viewpoints. If we're going to allow other industries (such as energy) to hire lobbyists or advertisers to change people's viewpoints, and even worse, to go straight to the decision-makers with lobbyist $$$, then criticizing Google is hypocritical.

>the outcome of a democratic election conflicts with Google's values.

What's wrong with this? The outcomes of the previous Presidential elections conflicted with many other companies' values. Every political election's outcome conflicts with some company's values, because companies stand to gain or lose depending on the policies enacted by that politician.

roenxi
I'm not making a moral case. I do think there is a moral case as well but it is a very complicated do-unto-others-as-you-would-have-them-do-unto-you style one with some nuances that isn't going to fit into one comment. The case is that Google is potentially a direct threat to the right wing of politics. It would be prudent for the right wing to respond by trying to break Google up and neutering them as a platform, so that there are several successful competitors in all their markets. Realistically it is possible that the moderate left wing could be convinced as well - nobody is served by the risk that an entity as powerful as Google becomes an active propaganda platform. If they aren't even professing neutrality internally then they are on the way to becoming one.

Google could have avoided this situation by not explicitly championing political views inside their organisation.

Also, the energy situations you cite aren't really comparable; those companies are only lobbying for things that make them more money, and they don't have the same sort of power as Google in the political sphere.

magduf
It would be totally hypocritical for the right wing to break up Google. The Democrats tried that with Microsoft back in the late 90s, and as soon as Bush took office the case was dropped because Republicans don't believe in enforcing anti-trust law.

>the companies are only lobbying for things that make them more money

Every company does this if they can, and it's either going to help or hurt some political side. The case you're making here is that Google is bad for Republicans. Maybe, but coal companies are bad for Democrats (they give money to Republicans to help them win races), so why is this OK for coal companies, but not Google? I don't see the difference. As long as other companies or industries are allowed to influence politics with money, it's perfectly OK for Google to influence elections however they want, and it would be wrong to break them up because, as I said before, the Republican party is opposed to anti-trust law.

roenxi
> It would be totally hypocritical for the right wing to break up Google. The Democrats tried that with Microsoft back in the late 90s, and as soon as Bush took office the case was dropped because Republicans don't believe in enforcing anti-trust law.

Circumstances were different - Microsoft wasn't doing anything particularly political. They aren't pro-Republican. This is the difference between politically attacking an entity because it is a corporation (a bad reason) vs attacking because they are politically active (an acceptable reason).

That is the central point. Google are removing potential defences against a political attack.

> Maybe, but coal companies are bad for Democrats (they give money to Republicans to help them win races), so why is this OK for coal companies, but not Google?

It is OK for Google; they can donate to whomever they want. The issue is that if they are going to be a partisan actor, they control too much information and have too much influence over how people gather information.

magduf
>It is OK for Google; they can donate to whomever they want. The issue is that if they are going to be a partisan actor, they control too much information and have too much influence over how people gather information.

If they can donate to whomever they want, they are also morally correct to control information however they want. Giving money to politicians is bribery, and is much more direct than merely controlling information on the internet. Personally, as long as bribery is legal, I have no problem with Google using a different tactic. It's much more ethical to try to shape people's opinions at large than to directly bribe politicians.

roenxi
That argument ignores scale, though: giving money to politicians directly may well be unethical, but it is a path that is open to everyone and is at least somewhat out in the open. Compared to that, Google basically is the internet for a large chunk of people, and tracking how they use their index is practically impossible.

Compare that to news media, where the actors are highly partisan but there are strong voices and opportunities to be heard for all points of view. The alternatives are a lot thinner for web search and YouTube, and most people would be shocked if it did turn out they were actively pushing a message.

Besides, I'd expect political donation laws to come under attack too. It is a very political question. Google should have stuck to strategies and pronouncements that are neutral, so that they were less likely to get involved in partisan politics.

It doesn't really matter whether you see it as ethical or not; what matters is that Google has huge and largely unchallenged reach in its field, and management appears to be taking official stances on social issues that it does not need to. This makes them a legitimate political target.

BlueTemplar
> If we're going to allow other industries (such as energy) to hire lobbyists or advertisers to change people's viewpoints, and even worse, to go straight to the decision-makers with lobbyist $$$, then criticizing Google is hypocritical.

I'm curious why you would assume that to be the default position?

ahnick
I think it depends on how the recommendation algorithms are implemented. (I have no idea about how Facebook and Twitter recommendation algorithms work)

Let's say, for example, you have a community-contributed labeling system that allows users to label content. A recommendation algorithm might then be written that simply pulls the top 10 most-viewed pieces of content that have the same labels as the piece of content the viewer just viewed. In this case, there is nothing in the algorithm that evaluates the content itself. The evaluation was done by users of the software, not the software itself.
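A rough sketch of that label-based recommender, assuming simple dict-shaped records; note that nothing in it inspects the content itself, only user-applied labels and view counts:

    def recommend(just_viewed, catalog, top_n=10):
        """Return the most-viewed items sharing at least one user-applied label.

        just_viewed: {"id": ..., "labels": set of labels}
        catalog:     list of {"id": ..., "labels": set, "views": int}
        """
        candidates = [
            item for item in catalog
            if item["id"] != just_viewed["id"] and item["labels"] & just_viewed["labels"]
        ]
        # Rank purely by view count; the evaluation of the content itself
        # was done by the users who applied the labels, not by the software.
        candidates.sort(key=lambda item: item["views"], reverse=True)
        return candidates[:top_n]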

wmf
Simple "neutral" algorithms are generally easy to game. If you choose an algorithm that you already know will be gamed for harassment or whatever then you're essentially choosing that outcome.
ahnick
The labeling algorithm is just a relatively naive example. More complex "neutral" algorithms can be designed. Mitigations for abuse that are "neutral" themselves may be added before/after initial implementation. (e.g To extend the labeling example, a moderation system could be added that allows users to flag inappropriate labels.) Also, I don't think it is always possible to foresee in what ways a system/algorithm might be abused; therefore, it might be necessary to address said abuse once it reveals itself.
philwelch
I don't think it's been convincingly argued that ranking or recommendation algorithms necessarily represent any sort of editorial bias. The purpose of these algorithms is to rank content based either on a prediction of what the individual user would want to see based on their existing history or on which content is more broadly engaging in general. In other words, if YouTube decides I really like to watch Ben Shapiro owning the libs with FACTS and LOGIC, that doesn't mean YouTube is making an editorial decision to promote that content, it just means YouTube has profiled me as some sort of conservative. If YouTube profiled me differently, they would be equally happy to recommend TYT videos or whatever.

There may be some editorial bias introduced if the userbase of a given social media site is themselves skewed in a particular direction. For instance, there seems to be some skew in the content that shows up on my Tumblr dashboard that seems fairly irrespective of anything I've subscribed to. But even that isn't necessarily damning for Tumblr, because they're not necessarily making an editorial decision to promote that content.

rjf72
There's a very simple solution here that avoids any and all bias and hidden agendas: simply make everything transparent.

- Users get to create their own profile. For instance, in your profile you get to select whether you would like to see content geared towards men/women/any. And this would follow for a whole slew of other topics, similar to the opaque profiles companies create on users today without their consent or input.

- All recommended videos have all their profile tags openly shown. And the reason for recommendation is also completely open.

The cool thing about it is that this also works really well with the current neural network based recommendation systems since they output a weighting of things. Simply transform those weightings into a nice output and the user can see exactly what's going on and exactly how they're being viewed. As a nice aside this would also remove a big chunk of the incentive for increasingly aggressive spying as companies try to get ever more extensive profiles on each and every person that uses their products.

One downside here is that users might not be consciously aware of their own preferences. This could again be resolved transparently. "Hey PhilWelch, we've noticed you like videos about underwater basketweaving. Would you like for this interest to be ticked in your profile? [Yes, No, No + Never ask again]"
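A sketch of what "transform those weightings into a nice output" might look like; the topic names, threshold, and data shapes are invented, and a real system would combine many more signals:

    def explain_recommendation(video_title, topic_weights, confirmed_interests, threshold=0.5):
        """Turn a recommender's internal per-topic weights into an open explanation,
        plus opt-in prompts for inferred interests the user never confirmed.

        topic_weights:       {"underwater basketweaving": 0.83, "politics": 0.04, ...}
        confirmed_interests: set of interests the user has explicitly ticked
        """
        top_topics = [t for t, w in sorted(topic_weights.items(),
                                           key=lambda kv: kv[1], reverse=True)
                      if w >= threshold]
        explanation = (f"Recommended '{video_title}' because of your inferred "
                       f"interest in: {', '.join(top_topics)}")
        prompts = [f"We've noticed you like {t}. Add it to your profile? "
                   f"[Yes / No / Never ask again]"
                   for t in top_topics if t not in confirmed_interests]
        return explanation, prompts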

Nasrudith
That would be awesome for advanced users but unfortunately bad for mass ergonomics. A shockingly high percentage of users have never cracked open the settings page even once.
rjf72
Why would it be hidden away in some settings instead of right in your face with a nice fat button "Change how my videos are recommended..."? Even beyond that, most online platforms today offer very limited general customization, so there's not much reason for users to go digging through settings. They are designed to work primarily with the default configuration, and so that becomes a self-fulfilling prophecy.
keepper
If you think ML ranking can't or doesn't have bias, you clearly have no idea about ML. Training data can and does have unintended biases; so can classifiers.

Please don't comment so decisively in an area where you have only cursory knowledge. ALL major sites are constantly trying to reduce bias in ranking, and there are many papers published by Google, Facebook, and others on this.

Ps: bias as defined as data bias.

Disclaimer: I work in ML ranking for a large site being discussed in this thread.

ikeyany
These platforms (these meaning the ones with the most market cap) need to be advertiser-friendly in order to grow. They inherently require editorial bias.
philwelch
I'm not an expert on advertising, but I suspect these concerns are often overblown. There are YouTubers who get demonetized--presumably because their content isn't "advertiser-friendly"--who still manage to get individual sponsorship deals. If these companies were serious about being a "common carrier", they would invest more money into developing an advertising market where advertisers could pick and choose the content they wanted to advertise and less money into demonetization.
afiori
Demonetized videos still get ads; it is a two-tier system where ad space on demonetized videos is priced lower.
philwelch
Even if that’s true—which I somewhat doubt—YouTube basically makes demonetized videos disappear in terms of recommendations. They don’t admit it but every YouTuber I’ve seen get demonetized confirms that views basically stop after demonetization, aside from subscribers.
tomp
No, if they instead worked to develop better user moderation tools, it would work just as well as it does now, just without their (appearance of) liability. Things like word filters, subscribable block lists, etc.
dvfjsdhgfv
> The problem with at least Facebook and Twitter (and possibly Google) in this analysis is that their attention-grabbing model requires them to analyse and interfere with the data which is coming through their systems.

Not to mention a completely different topic - ads for scams. When I create a new ad on FB, they keep it in review for at least a couple of hours. But every day I see dozens of ads that are obviously dishonest. Some financial scams, ads for some fishy schemes, some of them badly translated.

When you are a publisher and accept an ad, it's obviously very tempting to claim that all responsibility lies with the advertiser (it does not, although details vary between jurisdictions). But even if you didn't have any legal responsibility, it still feels terribly wrong from an ethical point of view to get rich by helping scammers cheat some poor chaps out of their money.

mlang23
Not on a daily basis, but I see ads for scams on youtube as well.

Different topic, but the strangest thing I witnessed on YouTube was ads for Audible during a stolen Audible book... So the owner of the content paid the thief (indirectly, but still) for placing an ad during a performance of the stolen content. And YouTube couldn't care less, as long as money moves...

iamsb
This is where Twitter screwing over third-party developers comes into play. I would have preferred it if Twitter's main feed were sequential and there were thriving third-party alternatives to it. Twitter could have easily monetised by taking a slice of ad revenue from third-party developers, or by charging them API fees.
chr1
They could make the algorithm customizable and let the user control it. That would be much better than the current black box, where you get uninteresting stories promoted because you misclicked once and have no way to tell the algorithm about its mistakes.
luckylion
> They could make the algorithm customizable and let the user to control it.

It often gets it wrong, but that's the general idea, isn't it? You don't customize it by checking some boxes or adding some keywords, you customize it by giving attention to some results more than to others.

Obviously, there's the added layer that they aren't optimizing for your satisfaction but for your business value to them.

evrydayhustling
This business model is the reason that FB, T and G are dominant platforms worth discussing. Products that are purely transactional are easy to compete with, splitting up markets and driving down prices. This is why true "common carriers", like ISPs, have fought desperately against net neutrality and to capture more customer data -- they want in on the value capture that comes from shaping content, because they are rapidly being commoditized as carriers of bits.

IMO this is a moat in the eye of a lot of libertarian tech philosophy. Most software products become competitive by building networks -- with end users (e.g. social networks), with developers (e.g. open source), or indirectly via content (from media aggregators to spam filtering). Once you're in the business of curating a network, you are implementing values whether you acknowledge them or not.

pishpash
Publishers don't exercise editorial control? Come on now.
davidw
The problem is also that they are advertisers, and I'll be damned if I want ads for my company appearing next to "Illinois Nazis" or ISIS or whatever other garbage.

There's another problem: it becomes a race to the bottom if it's nothing but trolls and garbage content.

mcny
> The problem is also that they are advertisers, and I'll be damned if I want ads for my company appearing next to "Illinois Nazis" or ISIS or whatever other garbage.

I don't understand the problem. Why can't Toyota or Procter & Gamble say it only wants to bid on a certain whitelist of channels or videos? The only downside is a smaller reach for the advertiser and a smaller "pie" for the "content" company (be it Instagram or YouTube).

I think the main problem is the "content providers" be it Instagram or YouTube want to maximize the ad revenue and any suggestion I have such as strictly reverse chronological timeline by default goes against the idea of "growth hacking".

I would personally love to be a fly on the wall at Netflix or Spotify headquarters. Because in my naive mind, Netflix makes MORE money when people subscribe to Netflix but never watch anything. But then if people watch Netflix a lot, they are more likely to talk about what they watch and encourage others to subscribe to Netflix?

Any company probably wants to make sure they make more money the more customers they have? Kind of sounds obvious but I don't think it is the case with every company. I'd imagine there are some businesses where there is an ideal size and if you are bigger, the additional customers kind of follow a law of diminishing returns? I can't think of any examples but would love to hear if anyone reading this can...

flukus
And the problem this creates for society is that we censor content that isn't advertiser-friendly. We've already seen where this leads with broadcast television: lowest-common-denominator trash that is safe for advertisers. Escaping that advertiser optimum is what made YouTube good in the first place.

I subscribe to a lot of history channels, so I've seen the effects of this filtering directly: history contains a lot of talk about Nazis and Hitler, and you can't even mention the big H without being demonetized.

But this is just another example of how advertising is a cancer on society.

jyrkesh
Exactly. At face value, I'm firmly in favor of what Ravikant was saying, but even the simple example of a spam filter proves how it's impossible in practice: without basic spam filtering, something like Twitter would be absolutely unusable in the face of endless crypto and malware spam. At some level, Twitter/FB/et al have to do SOMETHING to shape/filter the volume of traffic they receive.
xiphias2
At Google the top metric used to be relevance for a long time (until Amit Singhal was kicked out) for a good reason: it's a metric that kicks out spam, but doesn't pick sides.

Times and leadership has changed since though.

magduf
All good things...
bmc7505
> without basic spam filtering, something like Twitter would be absolutely unusable in the face of endless crypto and malware spam

I don't understand this argument at all. Is it so difficult for users to curate the list of accounts they follow?

Domenic_S
Replies. It would be impossible to follow a thread. Even today it can get rough following a thread spawned from an elon musk tweet, given all the crypto-scamming.
anm89
I think there is an "I know it when I see it" type of argument to say: we will make the strongest effort possible to never censor anything that is not attempting to sell a product, but we reserve the right to force advertisements to go through a specific channel.

For better or worse, the types of things which get called to be banned from a platform for being unacceptable speech rarely resemble the specific styles of advertising we classify as spam.

Another argument is to let users be their own spam control. Have some idiot acquaintance from middle school who keeps spamming pyramid schemes and/or nazism? Fine, just unfriend them.

As long as you are given control over what content is pushed at you directly, there is no need to censor the whole platform because of a specific undesirable.

6gvONxR4sf7o
That's interesting because we desperately need spam filtering on our phones right now.
eru
Firefox on Android allows extensions. Including ad blockers.
6gvONxR4sf7o
I was referring to filtering spam phone calls.
eru
Oh, those are annoying. I've been getting some lately, but not nearly as many as other people seem to be getting.

I think Google lets you report numbers as spam? (And, of course, caller id can be faked, but the important bit is that spam will differ statistically from legitimate calls.)

hamhock666
I think if there are going to be filters, then it would make sense for users to be able to opt in or out of them. That way you can choose what you want to see, instead of having an almighty force that determines what you see. Global filters that users can't turn off, except to filter out illegal content, needlessly take away freedom from users.
Fnoord
Usenet dealt with this via killfile. There is no reason why you cannot do client-side filtering (compare /silence with /ignore in IRC). Client-side and server-side each have their pros and cons.
dredmorbius
Usenet also had Cancelmoose and the Cabal (TINC).

Both individual and administrative filters are necessary.

the8472
At least local spam filters are purely receiver-side filtering, not prioritization. They could still provide users tools to pick for themselves instead of making editorial choices for them.
cryptonector
Anything can be spam because spam is in the eye of the beholder, but nonetheless, politics is generally not spam. You can have a politically neutral policy against undesirable content that passes the smell test. It does require that you a) use reasonably objective terms in the definition of "spam", and b) resist the inevitable pressure to reinterpret those terms to cover whatever others find to be undesirable political content.
shkkmo
Not true at all.

Spam is unwanted email that you did not sign up to receive and that was sent by someone you do not know.

TheOtherHobbes
Actually politics generally is spam - in the sense that much of it is knowingly dishonest self-promotion of individuals and social groups.

Just because politics isn't trying to sell you a penis extension doesn't mean it isn't trying to sell you something far worse.

It doesn't even matter how political content is published. What matters is that currently voters get more protection from a faulty toaster than from lies knowingly generated by politicians and corporate PR outfits.

The creation of deliberately false "talking points", fake news, smear campaigns, organised trolling and astroturfing, fake reviews, fake feedback, and other kinds of cognitive pollution should be banned from all forms of media.

Which is not to say this is easy or straightforward. But attempts should be made - because lying to voters and consumers is absolutely toxic to genuine democracy.

eanzenberg
There’s something fascinating when you equate penis enlargement spam with political speech.

Btw, lying in politics has happened for, like, forever. Is there any evidence that it's worse now than in the past?

Nasrudith
Both are messages coming to you. Just because they have the right to metaphorically knock on your door doesn't mean campaigners and vacuum cleaner salesmen don't both get the door slammed on them, or that doormen won't tell both to scram if they try to visit apartments unsolicited. The difference in category broadness seems like a classic straw man as well.
rayiner
I don't think calls to regulate Twitter/FB would get much traction if all they were doing was spam filtering. To see it from a different direction: imagine the shit show that would happen if Google started "moderating" gmail content, even though it currently does very aggressive spam filtering.
throwaway1777
It’s still not so easy. At what point does “fake news” become spam?
eanzenberg
It's pretty obvious what's spam and what isn't. "Fake news" is extremely subjective, so try not to anger half the country :)
yters
Google does censor my email to some extent. It sends a number of conservative emails straight to spam.
mch82
Is it doing that because they are “conservative emails” or because they violate spam rules?

If those emails are flagged because they're "conservative" then I agree with you that needs to be corrected.

However, if those emails are flagged because they violate well known spam rules then let’s help “conservative” email senders to follow the rules. Examples include not using single or double opt-in for signup, not honoring or providing an opt-out link, using emails from a bulk list that includes known flagged emails, not including a plain text copy of the email in the message, and originating from a flagged server. Reputable email services like MailChimp go to great lengths to educate email list admins how to follow these rules & curate high value email lists.
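To make the distinction concrete, here is a toy rule-based scorer over the kinds of violations listed above; the rule names, weights, and threshold are invented, and real providers combine far more signals:

    # Hypothetical weights for the list-hygiene violations mentioned above.
    SPAM_RULES = {
        "no_opt_in_signup": 2.0,       # list not built via single/double opt-in
        "missing_unsubscribe": 3.0,    # no working opt-out link
        "bulk_list_with_flags": 2.5,   # sent to a purchased list with known flagged addresses
        "no_plaintext_part": 1.0,      # HTML-only message, no text/plain alternative
        "flagged_origin_server": 3.5,  # sending server/IP has a poor reputation
    }

    def spam_score(violations, rules=SPAM_RULES, threshold=4.0):
        """Sum the weights of the rules a message violates and flag it over a threshold.
        Nothing here looks at whether the content is conservative or liberal."""
        score = sum(rules[v] for v in violations if v in rules)
        return score, score >= threshold

For example, spam_score({"missing_unsubscribe", "bulk_list_with_flags"}) flags the message regardless of its politics.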

mixmastamyk
Needs to be more specific as well. Is it intelligent conservatism in the vein of Bill Buckley, or knuckle-dragging ad hominem attacks from the lowbrow right?
oarsinsync
Gmail thankfully sends all the republican and democratic party emails I receive straight to spam.

One year, I made a concerted effort to troll some friends by getting them bumper stickers from the party opposing the one they supported (e.g. dems got republican bumper stickers, repubs got democrat bumper stickers).

Since then, despite being sure to opt out of mails at the time of purchase, and clicking 'unsubscribe' numerous times, both parties continue to spam me.

Gmail filters these emails into spam for me. Thank goodness.

I'm led to believe that if enough Gmail users mark emails as spam, this gets fed across to other users who haven't. So you can probably blame people like me, who mark these messages as spam, for it happening to you.

Or y'know, they could actually honor the 'unsubscribe' request and stop spamming me, and I wouldn't have to mark it (legitimately) as spam.

adventured
Then it doesn't censor your email at all.

You can freely view the emails in your spam folder, with one click. You can optionally move the emails to your inbox and train Gmail ("not spam" button) not to place them there in the future.

It's going too far to call it censorship (in any manner) when an email provider places an email into a spam folder that requires one click to view/review.

winkeltripel
A lot of overt spam (when gmail is most certain that it's spam) doesn't end up in the spam folder. A counter is incremented and the message is thrown out.
jsjohnst
Unless you are talking about email coming from the most blacklisted of IPs, that’s not true.
w1nst0nsm1th
It's not censorship. It's basic hygiene.
Spooky23
Political emails tend to get tagged as spam in general. I have someone close to me who is involved in local Democratic Party and progressive organizations. When you go to an event or donate money (say an award banquet), they share your name and spam the crap out of you. A lot of that mail gets tagged as spam as a result.

Conservative outlets work similarly, but they seem to have bigger online operations and are hard to get out of. My in laws get barraged with this stuff, and a mail provider like GMail or Microsoft gets a lot of spam signals from it I’m sure.

yters
Some of the conservative emails I did not opt in to, so they could be considered spam. Others I did opt in to, and they still get sent to spam.

On the other hand I get unsolicited email from some Democrat groups, and I never see such emails in spam.

magduf
I'm liberal, and lots of the unsolicited Democrat emails go to my spam folder.
QualityReboot
I've heard people in the political world talk about buying and selling email lists, which I thought was illegal, but seems to be common practice.
mixmastamyk
In the early 90's I gave something like $20 to the Sierra Club or a similar group. After a deluge of physical mail asking for donations from around the political spectrum, I never did it again. I left no forwarding address after moving, either. Pretty dumb on their part: they traded all their goodwill for a short-term dollar.
mlang23
Political emails are spam, no matter which direction they come from. So tagging them as spam seems like just the right thing to do.
eru
It's only spam, if you don't want it.

If you explicitly solicit those emails, eg by signing up to a newsletter, it's not spam.

Errancer
Political emails often are spam, but not all of them. I subscribed to a few newsletters from various political parties and I don't want Google to block them.
tomp
Indeed. Spam filters, porn filters, swearword filters... all of those have precedents (in mainstream, non-social/internet media) and are perfectly acceptable by most of society.

In addition, notice that even GMail doesn't actually filter spam. It just hides it - it's an assistant, not an arbiter of what you should see. It's more than happy for you to adjust the filters (for yourself personally).

Edit: GMail also does prioritization as of recently - but again, the purpose of that is to help users, not to filter content. Again, those algorithms are adjustable for each user separately; they just have some "smart" defaults.

tambourine_man
Kind of. You can't turn their anti-spam off, you can't exclude addresses from it and, worse for me, Gmail can hard bounce if it doesn't like something in your server/IP/content in a way that's not fully documented or reproducible.
stubish
You can now. You can create a filter matching whatever criteria you want (including from:[email protected] or a catch-all), and specify that matches 'never get flagged as spam'. It's not obvious, but it is there. And the usual limitations of filters apply, such as not being able to match arbitrary headers.
krageon
Things can get flagged outside of your mailbox as so spammy that they will never arrive. This is how every spam filter works that I have seen to date and google is no exception.
oarsinsync
That only works on mail that actually gets delivered to your mailbox. Gmail can and will:

- Reject delivery of mail from sources it has determined to be spammy (e.g. residential IPs) - this mechanism allows the sender to be aware that delivery failed.

- Accept delivery and then silently drop the message somewhere between after accepting delivery but before appearing in the end user's mailbox.

In either case, there is nothing you (the mailbox holder) can do. In the second case, there is no way for anyone to know the message hasn't been delivered.

spiraldancing
>GMail doesn't actually filter spam. It just hides it

This is not accurate. There is a long on-going issue with GMail (and others) wholesale blocking delivery of email from domains/IPs/providers it deems to be spam or otherwise inappropriate. GMail users get no indication/notification, and generally have no idea that email sent to them was simply not delivered.

Your comment actually touches on some interesting points.

First, one procrastinates only if they decided they should be doing something else at this moment. I am now relaxed and interested in spending some time on HN - you can actually learn something here as well.

Second, being constantly busy, optimising, and improving is a new fashion and disease. It's an unrealistic demand that causes anxiety. You also need to relax, reflect, play, do nothing. https://youtu.be/3qHkcs3kG44?t=1102

Third, it also depends on what your life goal is - to be happy, or to be super successful. I am happy to spend some time on HN, and I don't stress myself over it.

Also, as a strict materialist, after reading estimates from lots of different people across lots of different disciplines, and integrating and averaging everything, I think we'll likely have human-level or above AGI around 2060-2080. I think it's relatively unlikely it'll happen after 2100 or before 2050. I'd even consider betting some money on it.

I'm kind of coming up with these numbers out of thin air, but as much of a legend as he is, I agree Carmack's estimate seems way too optimistic to me. It's possible, but unlikely to me.

That said:

>The emphasis on computational power also makes no sense to me. If we had infinite compute today, what steps would you take to build AGI? Does anyone have any good ideas about that?

In this interview with Lex Fridman and Greg Brockman, a co-founder of OpenAI, he says it's possible that increasing the computational scale exponentially might really be enough to achieve AGI: https://www.youtube.com/watch?v=bIrEM2FbOLU. (Can't remember where he said it exactly, but I think somewhere near the middle.) He also makes a lot of estimates I find overly optimistic, with about the same time horizon as Carmack's.

As you say, it can be a little confusing, because both John Carmack and Greg Brockman are undoubtedly way more intelligent and experienced and knowledgeable than I am. But I think you're right and that it is a blindspot.

By contrast, this JRE podcast with someone else I consider intelligent, Naval Ravikant, essentially suggests AGI is over 100 years away: https://www.youtube.com/watch?v=3qHkcs3kG44. I think he said something along the lines of "well past the lifetimes of anyone watching this and not something we should be thinking about". I think that's possible as well, but too pessimistic. I probably lean a little closer to his view than to Carmack's, though.

mirceal
I believe that 100 years is optimistic. I would say that it's hundreds of years away if it's going to happen at all.

My bet is that humans will go the route of enhancing themselves via hardware extensions and this symbiosis will create the next iteration(s) in our evolution. Once we get humans that are in a league of their own with regards to intelligence they will continue the cycle and create even more intelligent creatures. We may at some point decide to discard our biological bodies but it's going to be a long transition instead of a jump and the intelligent creatures that we create will have humans as a base layer.

meowface
Carmack actually discusses this in the podcast when Neuralink is brought up. He seems extremely excited about the product and future technology (as am I), but he provides some, in my opinion, pretty convincing arguments as to why this probably won't happen and how at a certain point AGI will overshoot us without any way for us to really catch up. You can scale and adjust the architecture of a man-made brain a lot more easily than a human one. But I do think it's plausible that some complex thought-based actions (like Googling just by thinking, with nearly no latency) could be available within our lifetimes.

Also, although I believe consciousness transfer is probably theoretically achievable - while truly preserving the original sense of self (and not just the perception of it, as a theoretical perfect clone would) - I feel like that's ~600 or more years away. Maybe a lot more. It seems a little odd to be pessimistic of AGI and then talk about stuff like being able to leave our bodies. This seems like a much more difficult problem than creating an AGI, and creating an AGI is probably the hardest thing humans have tried so far.

I'd be quite surprised if AGI takes longer than 150 years. Not necessarily some crazy exponential singularity explosion thing, but just something that can truly reason in a similar way a human can (either with or without sentience and sapience). Though I'll have no way to actually register my shock, obviously. Unless biological near-immortality miraculously comes well before AGI... And I'd be extremely surprised if it happens in like a decade, as Carmack and some others think.

mirceal
I'm no Carmack, but I do watch what is happening in the AI space somewhat closely. IMHO a "brain" or intelligence cannot exist in a void - you still need an interface to the real world, and some would go as far as to say that consciousness is actually the sensory experience of the real world replicating your intent (i.e. you get the input and predict an output, or you get input + perform an action to produce an output), plus the self-referential nature of humans. Whatever you create is going to be limited by whatever boundaries it has. In this context I think it's far more plausible for super-intelligence to emerge and be built on human intelligence than for super-intelligence to emerge in a void.
meowface
How would this look, exactly, though? If you're augmenting a human, where exactly is the "AGI" bit? It'd be more like "Accelerated Human Intelligence" rather than "Artificial General Intelligence". I don't really understand where the AI is coming in or how it would be artificial in any respect. It's quite possible AGI will come from us understanding the brain more deeply, but in that case I think it would still be hosted outside of a human brain.

Maybe if you had some isolated human brain in a vat that you could somehow easily manipulate through some kind of future technology, then the line between human and machine gets a little bit fuzzy. In that respect, maybe you're right that superintelligence will first come through human-machine interfacing rather than through AGI. But that still wouldn't count as AGI even if it counts as superintelligence. (Superintelligence by itself, artificial or otherwise, would obviously be very nice to have, though.)

Maybe you and I are just defining AGI differently. To me, AGI involves no biological tissue and is something that can be built purely with transistors or other such resources. That could potentially let us eventually scale it to trillions of instances. If it's a matter of messing around with a single human brain, it could be very beneficial, but I don't see how it would scale. You can't just make a copy of a brain - or if you could, you're in some future era where AGI would likely already have been solved long ago. Even if every human on Earth had such an augmented brain, they would still eventually be dwarfed by the raw power of a large number of fungible AGI reasoning-processors, all acting in sync, or independently, or both.

mirceal
Yes, we probably have different definitions of AGI. For me, artificial means that it's facilitated and/or accelerated by humans. You can get to the point where there are 0 biological parts, and my earlier point is that there would probably be multiple iterations before this would be a possibility. If I understand you correctly, you want to make this jump to "hardware" directly. Given enough time I would not dismiss any of these approaches, although IMHO the latter is less likely to happen.

Also, augmenting a human brain for what I'm describing does not mean that each human would get their brain augmented. It's very possible that only a subset of humans would "evolve" this way and we would create a different subspecies. I'm not going to go into the ethics of the approach or the possibility that current humans will not like/allow this, although I think that the technology part would not be enough to make it happen.

Jun 08, 2019 · 1 points, 0 comments · submitted by HNLurker2
Interesting. Small world. It is just a few hours ago that I listened to Naval @ JRE: https://www.youtube.com/watch?v=3qHkcs3kG44
He was on the Joe Rogan Experience 2 days ago: https://www.youtube.com/watch?v=3qHkcs3kG44. He goes back on some of those tweets and talks about happiness, work, society, social networks, AI, robotics, energy…
Jun 05, 2019 · 6 points, 0 comments · submitted by hourislate
HN Theater is an independent project and is not operated by Y Combinator or any of the video hosting platforms linked to on this site.