Decentralizing the World Wide Web
September 12, 2018 12:37 PM Subscribe
The decentralised web, or DWeb, could be a chance to take control of our data back from the big tech firms... a group of 800 web builders and others – among them Tim Berners-Lee, who created the world wide web – were meeting in San Francisco to discuss a grand idea to circumvent internet gatekeepers like Google and Facebook. The event they had gathered for was the Decentralised Web Summit, held from 31 July to 2 August, and hosted by the Internet Archive. The proponents of the so-called decentralised web – or DWeb – want a new, better web where the entire planet’s population can communicate without having to rely on big companies that amass our data for profit and make it easier for governments to conduct surveillance.
“The services are kind of creepy in how much they know about you,” says Brewster Kahle, the founder of the Internet Archive.
This is coming from an individual who has made it his life's goal to archive the entire internet, regardless of the wishes of the people whose work he's backing up.
posted by NoxAeternum at 12:45 PM on September 12, 2018 [11 favorites]
Honestly, while I can sympathise with some of the ideas this article is haphazardly mashing together, the article itself is a hopeless mess. Hello blockchain, hello micropayments, etc.
My favourite quote: "Our laptops have become just screens. They cannot do anything useful without the cloud". I'd suggest not buying a Chromebook next time...
posted by pipeski at 12:52 PM on September 12, 2018 [11 favorites]
Honestly, while I can sympathise with some of the ideas this article is haphazardly mashing together, the article itself is a hopeless mess. Hello blockchain, hello micropayments, etc.
It basically comes across as "we should be the ones running the internet". Again, Kahle's hypocrisy illustrates the point - he's complaining about how much data the big services hoover up, while engaging in much the same conduct himself. Not to mention that their comments on harassment and illegal activity leave a lot to be desired.
Ultimately, they're trying to create a technological solution to a social problem, one that they refuse to try to understand.
posted by NoxAeternum at 1:04 PM on September 12, 2018 [10 favorites]
This article makes very little sense. You have privacy concerns from large companies using your data. But to some extent, you choose to give them that data, and no amount of "putting http on the blockchain" is going to fix that issue. You could do it through changing incentives, either through regulation or societal value shifting, but making people pay micropayments to access webpages is never going to work.
posted by demiurge at 1:09 PM on September 12, 2018 [9 favorites]
>This is coming from an individual who has made it his life's goal to archive the entire internet, regardless of the wishes of the people whose work he's backing up.
>...he's complaining about how much data the big services hoover up, while engaging in much the same conduct himself
I don't think this comparison is accurate. The Archive records the public-facing web, not "privately" (for lack of a better term) posted thoughts and images. The Archive doesn't then deep-analyze this data and use it to target ads. It's a public service with huge benefits for millions and is not for profit.
The simple fact that it records large amounts of information doesn't put it in the same class as companies like Google and Facebook, which surreptitiously track, store, and collate behaviors of individuals with the primary purpose of advertising to them or selling them a product. There's a huge difference here and conflating them to me seems disingenuous.
Any project that scrapes the web is going to have some privacy implications and will collect data that people may not have intended to be stored elsewhere. But that's a byproduct of the public, decentralized web as it originally operated. Putting something on the publicly accessible internet is a form of consent for reading and storing (your computer does it automatically for everything you see online) and Archive does that in a systematic way that's important for the web, not to mention for journalists, developers, and curious people.
Certainly we need feedback mechanisms and ways for takedown and copyright requests to be handled properly. Archive doesn't have a compliance team like YouTube's, building AI to automatically detect and take down content. It doesn't have a 20,000-person moderation team like Facebook (they outsource most of it, of course). And even those don't work. So the problems of unwanted duplication or bad content are so far unsolved even by the largest and most advanced tech companies in the world.
posted by BlackLeotardFront at 1:16 PM on September 12, 2018 [64 favorites]
I see a big difference between archiving websites that were meant to be as public as possible and hoovering up personal data that was never meant to be shared.
posted by Segundus at 1:16 PM on September 12, 2018 [24 favorites]
This seems like people proposing a whole bunch of new tech when, really, the tech for a decentralized web already exists: it's what we did before everyone started acting like Facebook, Twitter and Google were the entirety of the web. A return to personal websites, along with some decentralized social networks, would solve much of the problem.
Of course, those big centralized services made it easy for people to share information, even if they would otherwise not be technically proficient enough to set up a website of their own, so a return to a more decentralized approach, while generally good, might leave a lot of people feeling left out.
Basically, like NoxAeternum said, it's a social problem far more than it is a technical one.
posted by asnider at 1:21 PM on September 12, 2018 [7 favorites]
I was on board with what this article was laying down until I hit the part about everything depending on blockchain.
posted by Strange Interlude at 1:25 PM on September 12, 2018 [10 favorites]
This is coming from an individual who has made it his life's goal to archive the entire internet, regardless of the wishes of the people whose work he's backing up.
I don't think that's accurate. The Internet Archive respects robots.txt exclusions and you can apparently also request that your content be removed from it. There were some articles from 2017 about them planning to stop respecting robots.txt directives generally, but that must not have come to pass since I've personally run into the blocks very recently when trying to archive some new articles from a media website.
posted by cosmic.osmo at 1:30 PM on September 12, 2018 [25 favorites]
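For reference, the opt-out mechanism being discussed is a plain-text file at the root of a domain; ia_archiver is the user-agent string the Internet Archive's crawler has historically honored, so a complete exclusion looks something like this:

    User-agent: ia_archiver
    Disallow: /

Because the Wayback Machine has historically applied the current robots.txt retroactively to old captures, whoever controls a domain today also controls access to yesterday's snapshots of it – which is the failure mode described in the next comment.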
cosmic.osmo: One problem with their respecting robots.txt is that if someone takes over a domain, they can slap an archive.org exclusion into robots.txt, and then access to all previous captures for that URL goes away. For several years I lost access to captures of my old website hosted on a friend's server and domain, because the people who bought electricstoat.com after it expired (and who the hell would buy that) had slapped a robots.txt exclusion on the domain.
posted by SansPoint at 1:45 PM on September 12, 2018 [14 favorites]
So, you take the worst solution of the 1990s (micropayments) and merge it with the worst solution of the 2000s (blockchain)?
Decentralization is nice-- hey, I still have my own domain and website!-- but there's a reason people went to the big sites: 'cos all their friends were there. I don't see anything in the article that prevents this happening again.
posted by zompist at 2:12 PM on September 12, 2018 [5 favorites]
A polarising article. Interesting, given that I haven't read it. Instead, I'm looking at the comments, and thinking other thoughts about the way decentralisation backfires on the people who think it brings both freedom and responsible citizenship.
If this story is any good, the take home message is that unvetted information sources are shredding the social fabric. We can kiss Ben Franklin's aphorisms goodbye.
The appropriate precedent for study is the rise of the pamphleteer in the 18th Century.
I gather this was an era of unparalleled opinionating and quackery, both boosted by the readily available printing shop.
I am also reminded of Daniel Boorstin's study of advertising, and how really strict standards were eroded over the course of some decades.
On the other hand, of course, there's also the stranglehold of corporate TV and radio in midcentury, where centralised information control gave itself a really bad name.
Coming back to the decentralised web, the idea sounds fine in principle, and is already do-able through small, loose networks. But I'm struck by the likelihood of those cells morphing into closed, conspiratorial communities feeding their own neuroses and biases.
The extent to which that already happens on Facebook is evidence enough. The atomisation of the web might mean that giant propaganda machines cannot operate at scale, but it might also mean those machines can be the biggest fish in a planet of small ponds.
With the print media, pamphleteers and advertising, I think the worst behaviour was addressed through regulatory schemes, enforced by a government that had some basis in a civic-minded citizenry. Or something.
posted by rustipi at 3:17 PM on September 12, 2018 [3 favorites]
As ever, this sort of thing will fail, because the vast majority of the people want centralized products with low barrier to entry with a large user base that consolidates what they want to do into a few sites/apps, not independence or privacy. The old, decentralized internet didn't go away because of Facebook and Twitter, it's just hard to get many non-technical or privacy-centric people to use it.
There was this great old XKCD making fun of Ender's Game for suggesting that people posting content on the internet would be enough to sway public opinion but maybe, just maybe, if they'd use a fleet of bots posting on Twitter and Facebook, they could have affected an election.
This content-addressed approach makes it possible for websites and files to be stored and passed around in many ways from computer to computer rather than always relying on a single server as the one conduit for exchanging information.
That's fine and all for static content, but we already have BitTorrent for our decentralized pirating needs. The vast majority of what people want to consume is dynamically generated on each view - this doesn't solve the problems of the common user, and so the common user won't adopt it.
posted by Candleman at 3:23 PM on September 12, 2018 [3 favorites]
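As a concrete illustration of the content-addressed approach quoted above: the address of a piece of data is a hash of the data itself, so any peer holding the bytes can serve them and the requester can verify them. A minimal sketch in Python – the dict standing in for a swarm of peers, and the put/get helpers, are invented for the example:

    import hashlib

    store = {}  # stands in for many peers; anyone holding the bytes can serve them

    def put(data: bytes) -> str:
        # Store data under the hash of its own bytes; the hash is the address.
        address = hashlib.sha256(data).hexdigest()
        store[address] = data
        return address

    def get(address: str) -> bytes:
        # Fetch by address and verify: a tampered copy can't match its own hash.
        data = store[address]
        assert hashlib.sha256(data).hexdigest() == address
        return data

    page = b"<html><body>hello, dweb</body></html>"
    print(get(put(page)) == page)  # True

This also makes Candleman's caveat concrete: change one byte and the address changes, so the scheme only works cleanly for static content.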
As ever, this sort of thing will fail, because the vast majority of the people want centralized products with low barrier to entry with a large user base that consolidates what they want to do into a few sites/apps, not independence or privacy.
I find this comment puzzling. In my view, “people” don’t want this, advertisers and companies looking to monetize the content do. This person (N=1) prefers the opposite. Yes, dynamic content rules the [corporate] web, and since the vast majority of traffic goes to Facebook, Google, etc., dynamically generated pages represent the bulk of traffic.
But, is dynamic content better? Sure, it’s kinda gee whiz, look what I did, I have a different background on the title, the news feed is updated automatically, whatever. But the dynamic part is really developed and used by Google, FB, etc., to allow them to track the user and change the ads and links on the page according to their data and algorithm. (And then also abused by ad services and malware authors.)
So whatever. The blog post, photo, or whatever you’re putting on the web - the actual content - doesn’t change. So what’s more important, the ads, monetized links, and other come hither fluff on the page or the actual content and product of the author? Plus, minimalist, static sites use less bandwidth, load quicker, and are safer than the alternative.
posted by sudogeek at 4:42 PM on September 12, 2018 [5 favorites]
Interesting, given that I haven't read it. Instead, I'm looking at the comments, and thinking other thoughts...
Some things about the web will never change.
posted by neroli at 5:04 PM on September 12, 2018 [14 favorites]
For several years I lost access to captures of my old website hosted on a friend's server and domain, because the people who bought electricstoat.com after it expired (and who the hell would buy that) had slapped a robots.txt exclusion on the domain.
Oh man, I never even knew about that -- sorry! I let the domain lapse because I kind of forgot anyone was using it...
posted by bokane at 6:01 PM on September 12, 2018 [2 favorites]
> As ever, this sort of thing will fail, because the vast majority of the people want centralized products with low barrier to entry with a large user base that consolidates what they want to do into a few sites/apps, not independence or privacy.
I find this comment puzzling. In my view, “people” don’t want this, advertisers and companies looking to monetize the content do.
People want centralized, connected experiences with low barriers to entry. If they didn't, Amazon and Twitter would have flopped horribly.
Any push for decentralization is moving directly against people's inclination to use whatever's easiest. It's adding complexity to a process that many already find annoyingly complex; the payoff is the vague "better personal control of data" that hasn't been enough motivation to keep Facebook from handing an election to the highest bidder.
posted by ErisLordFreedom at 6:06 PM on September 12, 2018 [6 favorites]
I was notified of this article, on a website not owned by the big four and a half, via an RSS feed aggregator running in a Linux VM that I browse using Firefox.
The tools already exist.
People either see value in the big sites or they don't. These articles are just by people who don't, trying to convince people who do.
As el io said, the bigger problem is the EU's ridiculous legislation.
posted by krisjohn at 6:24 PM on September 12, 2018 [5 favorites]
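The "tools already exist" point is easy to make concrete: a self-hosted aggregator can be a few lines of Python, assuming the third-party feedparser package and whatever feeds you actually follow (the URLs below are placeholders):

    import feedparser  # pip install feedparser

    FEEDS = [
        "https://example.com/rss",       # placeholder; substitute real feed URLs
        "https://example.org/atom.xml",  # placeholder
    ]

    for url in FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries[:5]:  # five most recent items per feed
            print(feed.feed.get("title", url), "-", entry.title)
            print("  " + entry.link)

No Google, no Facebook, no tracking – just polling the sites directly.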
People "want" low barriers to entry - at least more people will use things with low barriers to entry - sort of by the definition of "barrier to entry." I don't think people care whether it's centralized or decentralized, inherently - it's just generally been easier for centralized services to deliver on the low barriers to entry. But then there are some things a lot of people are not happy about in those platforms lately - lack of privacy, proximity to *chan Nazis - that seem like they could be addressed in some respects by decentralization and building smaller communities. The trick is to make that relatively seamless.
posted by atoxyl at 6:33 PM on September 12, 2018 [4 favorites]
It basically comes across as "we should be the ones running the internet".
I think they're going for the idea that "we should make it harder for the people running the Internet to abuse it for their own ends." Someone is always going to be "running the Internet" in the sense that there is infrastructure which needs to be built and maintained, a certain degree of regulation is necessary to curtail criminal activity and (in an ideal world we do not ourselves inhabit) systematic Internet-mediated violence ranging from doxxing and abuse to SWAT-ting, etc.
The question is whether we can have those things in a way that does not involve handing over the de jure or de facto capacity to large corporations or governments to use the resultant data to profit, oppress, open their users to the threat of massive identity theft, etc. To my knowledge the Internet Archive is not used for any of those things. The issues some people seem to have with it seem to fall more into the domain of respecting the rights of creators, which is certainly an issue, but not really connected to the others. I think tackling the latter is a valuable task in and of itself.
I'd be interested, if anyone has the insight (or links to other articles), in a technical look at the solutions they're positing (something a bit broader than "blockchain, pshaw"). I know precisely nothing about the technologies in question.
As ever, this sort of thing will fail, because the vast majority of the people want centralized products with low barrier to entry with a large user base that consolidates what they want to do into a few sites/apps, not independence or privacy.
I find this comment puzzling. In my view, “people” don’t want this, advertisers and companies looking to monetize the content do.
The monetization gives those advertisers and companies the resources to offer people what they do want - i.e. the aforementioned low barriers to entry, easy access, bright shiny pictures, etc. You give the people enough of what they want for them to give you most/all of what you want - it's a model that's been working since long before the Internet.
I try to resist indulging, but I really can't pass this one by
Metafilter: an era of unparalleled opinionating and quackery
posted by AdamCSnider at 7:05 PM on September 12, 2018 [4 favorites]
Certainly we need feedback mechanisms and ways for takedown and copyright requests to be handled properly. Archive doesn't have a compliance team like YouTube's, building AI to automatically detect and take down content. It doesn't have a 20,000-person moderation team like Facebook (they outsource most of it, of course). And even those don't work. So the problems of unwanted duplication or bad content are so far unsolved even by the largest and most advanced tech companies in the world.
They don't work as a matter of design. YouTube has a position of treating copyright as wrong as a matter of policy - if you are a small scale creator filing a takedown request with YouTube, they will do everything in their power to drag out the process in the hopes that you'll just give up. And if you don't, YouTube will then attempt to shame you, by stating that they are filing your takedown request with the Chilling Effects database. (The only effect this has had is to destroy whatever value the Chilling Effects program may have had.)
Which leads to:
As el io said, the bigger problem is the EU's ridiculous legislation.
So how, exactly, is it "ridiculous"? From where I'm sitting, looking at Articles 11 and 13 shows the EU's intent - to force Alphabet to actually treat content creators fairly. Article 11 is meant to force aggregator services like Google News to share profits with content creators, while Article 13 is meant to deal with the above issues with services like YouTube.
posted by NoxAeternum at 7:06 PM on September 12, 2018 [1 favorite]
I too would love a breakdown on what's wrong with the EU legislation; I haven't looked at it much because my own country is actively devouring itself.
posted by aspersioncast at 7:12 PM on September 12, 2018
bokane: Don't worry about it. I got my own hosting not long after anyway. Was just kinda sad that whoever snapped up the domain decided to keep archive.org from accessing it for years.
posted by SansPoint at 7:14 PM on September 12, 2018 [3 favorites]
I too would love a breakdown on what's wrong with the EU legislation; I haven't looked at it much because my own country is actively devouring itself.
These are pretty good breakdowns on Article 11 and Article 13.
posted by NoxAeternum at 7:18 PM on September 12, 2018 [2 favorites]
I find this comment puzzling. In my view, “people” don’t want this, advertisers and companies looking to monetize the content do. This person (N=1) prefers the opposite.
Well, yes, you are an example of one while the major social media networks are measured in hundreds of millions of users. That should tell you what "people" want. If your online handle references sudo, you are not the target demographic. My mother doesn't want to read static content for the most part; she wants to go to a site where she can see photos and updates from friends and family and communicate with them. The idea of seeding static versions of her photos so they can be served from other people's computers would be completely foreign to her.
The blog post, photo, or whatever you’re putting on the web - the actual content - doesn’t change.
What draws people to social networking is the community that it builds around that content - the likes, the comments, etc. People pretty quickly found that just posting into the void with no feedback was unsatisfying. Metafilter itself is quite dynamic; if it were just a static page that a batch script periodically updated with new links once an hour or whatever, it would almost certainly have ceased to exist years ago.
posted by Candleman at 8:53 PM on September 12, 2018 [6 favorites]
Metafilter: Closed, conspiratorial communities feeding their own neuroses and biases.
(Sorry, someone had to do it.)
posted by kaibutsu at 10:09 PM on September 12, 2018 [6 favorites]
From where I'm sitting, looking at Articles 11 and 13 shows the EU's intent
Well you know what they say about roads and hell and intentions. The EU can intend as hard as it wants but it doesn't change the fact that Google has enormous bargaining power that will be used to force content creators to cut special deals that will let it escape the link tax.
"In Germany, most publishers simply opted in to Google News without receiving a license fee. This was because Google, rather than offer to pay, simply threatened to drop German publications if they didn’t opt in. [...]
[In Spain] the law did not allow publishers to opt out and, when the law took effect, Google News (along with a variety of smaller aggregators) simply shut down. It is still shut down today.
A study paid for by publishers found that this resulted in a traffic drop between 6 and 14 percent for Spanish news sites and a loss of revenue of about €10 million ($11.7 million) in the first year."
(from the linked article [note that in the future this snippet might cost Metafilter money])
posted by Pyry at 11:07 PM on September 12, 2018 [2 favorites]
Nothing in the EU proposals would disallow this post or "virtually all of metafilter"
Article from a reputable news site rather than Cory's blog: https://www.bbc.co.uk/news/technology-45495550
posted by JonB at 12:46 AM on September 13, 2018 [5 favorites]
I hope it won't be received as needlessly contrarian if I express the wish that people would stop taking Tim Berners-Lee seriously on these matters.
I'm sure he's a very bright guy, and I'm equally certain he's had to make himself into some kind of expert on hypertextual communication networks over the past decade and a half, for social reasons if no other. But let's never forget that he devised the Web as a hack or a kludge, to address a specific local need, and that his implementation created all sorts of problems that are with us still.
Despite having run across him at more than a few high-level events over the years, where he's generally been received as having come down from Olympus for the day, I have never at any time heard him articulate an interesting idea, nor propose any scheme I regard as sound and productive. This feels consonant with that experience.
posted by adamgreenfield at 3:20 AM on September 13, 2018 [1 favorite]
It's probably worth noting, if someone hasn't already, that EU directives are designed to allow member states considerable latitude in how they're actually implemented, subject to achieving roughly the desired outcome. In practice, the application of directives can sometimes be somewhat ... patchy. Individual member countries will, as they always do, aim to apply the new directives (in whatever form they finally emerge) in ways that will cause the least inconvenience to tax-avoiding mega-corporations.
posted by pipeski at 3:25 AM on September 13, 2018 [2 favorites]
Another difference is that most passwords could disappear. One of the first things you will need to use the DWeb is your own unique, secure identity, says Blockstack’s Ali. You will have one really long and unrecoverable password known only to you but which works everywhere on the DWeb and with which you will be able to connect to any decentralised app.
So to use the decentralized web, you need to use our centralized/blockchainy authentication service. No thanks, I'll stick with Malibu Stacy.
posted by RobotVoodooPower at 5:05 AM on September 13, 2018 [1 favorite]
Another difference is that most passwords could disappear. One of the first things you will need to use the DWeb is your own unique, secure identity, says Blockstack’s Ali. You will have one really long and unrecoverable password known only to you but which works everywhere on the DWeb and with which you will be able to connect to any decentralised app.
One keylogger away from every single account I have being pwned? I mean, tempting and all.
posted by jaduncan at 5:10 AM on September 13, 2018 [1 favorite]
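For what it's worth, the "one really long and unrecoverable password" in schemes like this generally means deterministically deriving keys from a master passphrase, so nothing is stored server-side. A rough sketch of the derivation step using only Python's standard library – the per-app salt and iteration count are illustrative, not Blockstack's actual scheme:

    import hashlib

    def derive_seed(passphrase: str, app_domain: str) -> bytes:
        # One master passphrase, stretched into a distinct seed per app.
        # Lose the passphrase and everything is gone; leak it (say, to a
        # keylogger) and everything is exposed - the trade-off noted above.
        return hashlib.pbkdf2_hmac(
            "sha256",
            passphrase.encode(),
            app_domain.encode(),  # per-app salt, so services can't correlate keys
            600_000,              # iteration count, illustrative only
        )

    print(derive_seed("correct horse battery staple", "example-dapp.org").hex())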
I'm going to read the articles NoxAeternum posted as well, but the vote breakdown on that EU law makes me pretty confident it's not for the best.
posted by AnhydrousLove at 6:18 AM on September 13, 2018 [2 favorites]
As others have said, it seems that the technologies to decentralize content hosting exist and the challenge is putting them together in a way that is simple enough for the average person to use them. I think enough people are concerned about giant corporations knowing our personal details and controlling content distribution that they would be interested in an alternative, but it has to be dead simple and user friendly. The problem is that “user-friendly” and “has the right functionality (i.e., does the things people want)” exist on a continuum and are often at odds (the more features, often the harder it is to use or at least to find the one function you want).
It seems that you'd want to start with a relatively simple (aka, dumb) implementation, maybe focused only on static content. After all that's how the web started, and all of the interactivity is built on top of that simple foundation. One option at least for static content is for people to run their own web servers at home or hosted somewhere. Even if we made web servers much simpler, that seems like a non-starter because the capability is already there and very few people take advantage of it.
Instead I'm imagining a BitTorrent-like system. Each person gets an identifier, and each piece of content gets something like a UUID but probably much longer because the number of possible pieces of content is basically infinite. But this is all automatically generated and invisible to the user. You find people using simple things like usernames and details they choose to share publicly, and find content by content titles, content, and metadata. Like BitTorrent, once you connect to someone they can point you to other people for connections and content. When you identify a piece of content you want (likely via a new breed of search engine or a curated list from a contact), your machine puts out feelers to all of your connections and possibly their connections, increasing the breadth of the search until the content with that super UUID is found or until some kind of time to live/maximum hops expires. Once you consume a piece of information you store it and can serve it up to others.
So from an end user perspective, you just create a username and password (the password doesn't really get stored anywhere, it just signs your private key, but you don't have to know that) and then connect with at least one node. But the user doesn't know it's a node in a network, they know it as a person or organization they trust like “my friend Jill” or “MetaFilter.” Once you're connected to at least one trusted source, they're your gateway to finding people and content, and all the nodes would automatically share connections to keep the network working the same way BitTorrent shares IPs with the desired data. There, now you have a network that can distribute simple static data like text without a central server. Functionality to string these chunks together into larger things like a chain of comments can be built into the consuming apps, not into the system itself. Each new person just starts with one trusted contact, and it’s as simple as username / password / add friend.
Centralization would be built on this almost immediately, though. You're not going to constantly monitor and block spammers and Nazis, that would be exhausting and make it a really undesirable experience. A trusted person or community would publish whitelists or blacklists and most users would accept those automatically, the same way people publish blacklists for ad blockers. I know I'd trust the unofficial MetaFilter list.
But the key is, this is a very limited use case. All this system solves is “how do we get hosting out of the hands of a few big entities so information can be published without them?” It doesn't provide any real privacy controls except “don't put private info on the network.” Decentralizing publishing could provide a lot of very cool benefits, but there are tons of problems on the internet that absolutely cannot be solved simply by decentralisation: stolen content so content creators don’t get paid; doxxing; harassment; misinformation; hate speech. So even if some kind of decentralization scheme happens, it’s a modest change that would help make information easier to disseminate without a central authority – and that’s all.
posted by Tehhund at 7:19 AM on September 13, 2018 [3 favorites]
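The "feelers" search Tehhund describes is essentially flooding with a hop limit (TTL), which fits in a few lines. A toy, in-memory Python version – the Peer class and the node names are invented for illustration:

    class Peer:
        def __init__(self, name):
            self.name = name
            self.neighbors = []  # trusted contacts: "my friend Jill", "MetaFilter"
            self.content = {}    # content id -> bytes held locally

        def search(self, content_id, ttl=3, seen=None):
            # Check locally, then ask neighbors (and theirs) until the
            # content is found or the hop budget runs out.
            seen = set() if seen is None else seen
            if self.name in seen:
                return None
            seen.add(self.name)
            if content_id in self.content:
                return self.content[content_id]
            if ttl == 0:
                return None
            for peer in self.neighbors:
                found = peer.search(content_id, ttl - 1, seen)
                if found is not None:
                    return found
            return None

    jill, mefi, stranger = Peer("jill"), Peer("metafilter"), Peer("stranger")
    jill.neighbors, mefi.neighbors = [mefi], [stranger]
    stranger.content["cid-123"] = b"a static page"
    print(jill.search("cid-123"))  # found two hops out

In a real network each successful fetch would also be stored and re-served, BitTorrent-style, as the comment says.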
I don't think the problem of government censorship is either articulated or solved with decentralization. There will always be undercover agents to worry about with total anonymity, and one's home computer is off the table for storing content in a hostile regime. I don't see how the blockchain helps, and they are currently used by the worst governments to spread fake news and ransomware. If memory serves, decentralization was a strategy buzzword originating to thwart DDOS attacks, which themselves have become decentralized. I would like to read more about the assumptions of decentralization in terms that don't appear to be bluffing for bitcoin.
posted by Brian B. at 7:39 AM on September 13, 2018 [3 favorites]
I've written about this elsewhere, so please forgive some copypasta, but: I think that one basic problem here is the misconception that federation is a feature of distributed systems. I’m pretty confident that it’s not; specifically, I believe that federated systems are a byproduct of computational scarcity. The short version of my argument is that federation was something you needed if you wanted a globally-usable system in a world where the rare, not-very-good computers of the world cost five million dollars each, and that is just not how the world works anymore.
I also think that it is a mistake to conflate distributed or federated systems with accountable systems. Mastodon, fun as some of its instances are, is only accountable per-instance, and a bunch of those instances are festering boils. Which at least offers some containment, and that's definitely not nothing, but it is not the same as accountability. The reason Mastodon is popular in otaku circles, for example, is the quantity of content being recirculated there that is so illegal that even Twitter can't bring their negligent, head-in-the-sand policy team to ignore it.
In contrast, the core problem with Twitter isn't that Twitter is centralized, it's that Twitter refuses to police its users in any meaningful way until they hurt a revenue stream's feelings. Centralization or decentralization is irrelevant to the question; the problem is that Twitter is wholly unaccountable, declaring themselves not in any way liable for the bad actors they tolerate or amplify.
Putting aside the cartoonishly retrograde ways this article discusses identity and security - which are, to be clear, criminally bad, doing the absolute worst of what people accuse Valley engineers of doing by glossing over complex social and technical implementations, just declaring them simple and ignoring their human consequences - I think the biggest problem underpinning this whole idea is that it's fundamentally about wanting the world to be like it was in the early 90s, and that world's not coming back.
posted by mhoye at 7:49 AM on September 13, 2018 [7 favorites]
My contrarian hope is that Article 13 leads to a resurgence of interest in good ol' P2P. I think the reality is that it probably just shifts the balance between Google and Spotify, though, and that's - whatever.
posted by atoxyl at 12:11 PM on September 13, 2018 [1 favorite]
I remember the heady days of the early 2000s when we thought giving power to the people through the decentralization of information was going to end tyranny.
But instead Web 3.0 will be permanent, undeletable revenge porn and Russian misinformation campaigns. Way to go nerds of the world!
posted by Skwirl at 12:23 PM on September 13, 2018 [4 favorites]
Revenge porn and deepfakes.
posted by rustipi at 2:00 PM on September 13, 2018 [1 favorite]
AT&T, Verizon, T-Mobile & Sprint Want Even Broader Access To Your Personal Data
posted by homunculus at 4:46 PM on September 14, 2018
MeTa on EU copyright directive as would/might pertain to MeFi.
posted by progosk at 4:25 AM on September 17, 2018
This thread has been archived and is closed to new comments
posted by el io at 12:41 PM on September 12, 2018 [20 favorites]