One such sub where this happened is /r/pics. Their counter was to limit allowed content to pics of John Oliver, based on a poll among the users. It will be hard for reddit to justify another move against the moderators here.
Anyway, as moderators got removed, other subs decided to extend their protest to an unspecified date; some want to keep going until reddit reverses its changes, or remain dark forever.
/r/Hitman extended their protest too. I don't know for how long, but according to one of the mods (@Fuzk), any time frame is possible.
I think it's pretty crazy that within the past year, even pretty recently, we've seen companies like
Twitter
Twitch
Discord
Reddit
(I'm sure I'm missing a handful)
change the way their platforms work in unfortunate, anti-consumer, for-profit ways, even stooping to forcing AI experiments on their customer base, which ends up drastically changing how the platform works for some people.
I am so glad I don't use most of that stuff. I have a Facebook account to keep up with family but that's about it for social media, except occasionally watching Twitch. I wasn't aware they had changed anything though, which may tell you how often I actually use it.
In a surprising last-minute amendment to the otherwise innocuous "Courts and Civil Law (Miscellaneous Provisions) Bill 2022" from September 2022, the Irish Government added a provision that would allow the Irish DPC to declare almost all its procedures "confidential". Section 26A would make most reporting about procedures or decisions by the DPC a crime. Speaking about outlandish claims by "big tech" or unfair procedures that often concern millions of users would equally become a crime.
For context, the Irish DPC is known to be extremely lax at enforcing European privacy laws, which is why the European subsidiary of Meta (Facebook) is located there. Other big tech companies are probably there for the same reason.
That can't be legal… Google doesn't own the internet…
They own something that can pull up a list of things on the internet, but they don't own them…
It is the zeitgeist for AI to scrape everything available, regardless of ownership.
Worth noting that in the EU, where there are at least some privacy laws, Google did not mention these things in their policy.
Oh, while we are at it, guess what else is absolutely incompatible with EU law? Meta's Twitter clone.
It is amazing how the UK manages to surpass the worst dystopian fiction with its surveillance programs.
The existing IPA regime appears to already allow the U.K. government to demand that companies alter their services in a manner that may affect all users. For example, a technical capability notice requiring the "removal by a relevant operator of electronic protection" could be used to force a service, such as WhatsApp or Signal, to remove or undermine the end-to-end encryption of the services it provides worldwide, if the government considers that such a measure is proportionate to the aim sought.
Device manufacturers would likely also have to notify the government before making available important security updates that fix known vulnerabilities and keep devices secure. Accordingly, the Secretary of State, upon receiving such an advance notice, could now request operators to, for instance, abstain from patching security gaps to allow the government to maintain access for surveillance purposes.
More importantly, expanding the extraterritorial effects of the notices regimes would entitle the U.K. government to decide the fate of data privacy and security for virtually every citizen in the world. For example, a notice asking operators to undermine end-to-end encryption would mean that end-to-end encryption would also be weakened for citizens in states with authoritarian regimes and a weak rule of law.
The government, however, has said the bill does not ban end-to-end encryption.
Instead, it will require companies to take action to stop child abuse on their platforms and, as a last resort, develop technology to scan encrypted messages, it has said.
Tech companies have said scanning messages and end-to-end encryption are fundamentally incompatible.
As some know, this forum has a critical stance on posting AI-generated images. But obviously large companies are investing heavily in this field, not only in image generators but in other areas as well.
Here is a handy list of statements arguing why companies should not have to pay for, or even ask permission to use, the copyrighted material they use to train AI:
I take a much, much harder stance than most people I know on the topic, but I do agree that using other people's work to "train" large language models is a violation of copyright. I think it's completely natural for the companies developing these programs to argue against paying for that content - paying would mean they make less money and, let's face it, if they didn't think they could make money with large language models, they wouldn't be doing it in the first place.