Since my original report about all this came out last week, I've interviewed people involved in running the 2025 Seattle Worldcon to pull together a clearer picture of what exactly happened. Based on these interviews, it appears the use of ChatGPT was indeed limited to the vetting of panelists. However, I also learned the use of ChatGPT was not initially approved by Worldcon leadership. Instead, a lower-level Worldcon volunteer decided on their own to use the generative AI program. This was done in the belief the program could complete a time-intensive project when there were not enough volunteers to complete the job manually.
From last year's debacle
Tumblr and WordPress.com are preparing to sell user data to Midjourney and OpenAI, according to a source with internal knowledge about the deals and internal documentation referring to the deals.
UGH — the bottom of the article also says Reddit is selling user data to AI companies
A huge blocklist of manually curated sites (1000+) that contain AI generated content, for the purposes of cleaning image search engines (Google Search, DuckDuckGo, and Bing) with uBlock Origin or uBlacklist.
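For anyone unfamiliar with how such blocklists plug into uBlacklist: the extension accepts one match pattern per line, and you can paste entries manually or subscribe to a hosted list. A minimal sketch of what the format looks like (the domains here are made up for illustration, not taken from the actual list):

```text
# uBlacklist match patterns — one per line; hide these sites from search results
*://example-ai-gallery.com/*
*://*.example-promptart.net/*
```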
AI MUST DIE is a short zine that presents critical perspectives on the way that AI is talked about, governed, and owned in 2025.
Materials here are intended as solidarity and solace for educators who might find themselves inventing wheels alone while their administrators, trustees, and bosses unrelentingly hype AI and nakedly enthuse over the negative consequences for educator labor.
There's a meme page!
This is not to say that I never use GPS systems, but I try to minimize my use — using them only when absolutely necessary — because becoming dependent on them causes the parts of your brain that do that work to atrophy. Literally.
Whoops, never thought of that before! Like, I know using AI to write makes me a worse writer, but using GPS directions in Google Maps also makes me worse at navigating? It makes sense in retrospect. And with the GPS I never have to develop the skills to navigate in the first place. Yikes!
Today, it may seem to many that the cluster of technologies marketed as “AI” is entirely new, and, logically, that objection to it must likewise be unheard-of. But, as the demonstration shows, not only is “AI” not especially new; protesting it has a long history. [...] [W]e are calling for resistance to the AI industry’s ongoing capture of higher education.
We envision a resistance that is, by its very nature, a repudiation of the efficiencies that automated algorithmic education falsely promises: a resistance comprising the collective force of small acts of friction.
But what I vehemently object to in this situation is the use of the first-person voice without my review or permission. The language used in the description makes it sound as if I wrote it (“In this post, I share my personal journey…”). Because I have fiercely protected my authorship throughout my life and what my name is attached to, any generative AI writing that purports to be in my voice without my informed consent is a profound violation of my authorial voice, agency, and frankly it feels like fraud or impersonation. As an archivist who has spent almost twenty years thinking about accuracy in information, it makes my skin crawl that there is a metadata field with the sole purpose of generating SEO-engagement purporting to be my voice that doesn’t disclose the authorship was actually non-consensual AI.
I fucking hate the Tech Bros. I hate the hype. I hate the Bros wrongly claiming LLMs will turn us all into toast. I hate their never-ending quest to make their investments have a return. I hate the venture capitalists in their Patagonia vests who talk about "disruption" while they burn down the library of human experience and fuck over workers. I hate them with the specific, intricate hatred of a survivor who knows exactly how the grift works.
I hate LLMs. My hatred knows no bounds. I love the small web, the clean web. I hate tech bloat.
And LLMs are the ultimate bloat.
But as PL points out, Ask This Book is, in effect, “an in-book chatbot. You ask any question about the book, and a generative AI process provides you answers.” Which would seem…hmmm…to raise some rights concerns.
UGH, really??
Every time there's news from Mozilla we see a lot of takes around here along the lines of: they're clueless, their heads are in the sand, they don't know their userbase.
Alternative interpretation: We're looking at another case very much like Bluesky - a corporation with somewhat-openwashed branding which knows exactly who their userbase is, hates it, and wants a different one.
The rationale is clear enough; the browser is just a massive opportunity for datamining. The "AI" startups can only dream of controlling a browser with even the marketshare of Firefox. In that light, it's no use having a userbase of technically competent, privacy-aware dissidents who can work around the extractive dark patterns. Let's face it, people: we're not profitable to surveillance capitalism.
“I felt like Jason and the mods cared more about Claude than the welcoming community they built. Considering Jason is the owner of the server, I wouldn't trust him to be able to put the community first before putting AI first,” ML told 404 Media.
(Need free account to read full article or else here: https://archive.is/Ypur6)
There is a tedious point that advocates of AI art will periodically articulate to the effect of AI rendering art accessible to more people—ones lacking in time or ability to otherwise produce it. The response to this is generally that the time and labor involved is fundamental to art. But even more fundamental is the thought involved. At the end of the day what defines art is the existence of intention behind it—the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things. Nobody else is going to do that work for us. If we don’t do it, really, what’s the fucking point?
The issues and harms surrounding emerging technologies are especially concerning given the lack of regulation in the tech industry generally, and the tendency of productivity-increasing technology to further concentrate power in the hands of the few. This reading group will explore these risks and engage with how they work in the hopes of better organizing to protect the rights of workers and individuals. The goal is to have a better understanding of the costs (data, carbon, human labor) and risks (misinformation, unpredictability, bias) of making these machines, as well as limitations in what they can learn about the world primarily through data scraped from the internet.
Excellent book list if you're interested in the topic(s).
Tracking the use of generative AI by fascists and adjacent forces - CONTENT WARNING: distressing imagery in many flavours: violent, offensive, racist, supremacist, etc
Appends udm=14 to the Google search URL to instruct Google to return only the Web results. There is even an extension and a website built specifically for this use case. This is suitable for users who would like clean search results without the clutter from Knowledge Graph, Local Results, Related Questions, etc. There are other udm=x values than just 14; I have been doing some searching and compiled all of them here.
Save this to a custom search string in Firefox/whatever browser and you can automatically use it to search for things in Google without the AI nonsense coming up.
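If you'd rather build the URL programmatically (say, for a launcher script) than save a browser search keyword, here's a minimal sketch in Python — the function name is my own, and the only assumption is the documented `udm=14` query parameter:

```python
from urllib.parse import urlencode

def web_only_google_url(query: str) -> str:
    """Build a Google search URL restricted to the plain "Web" results tab.

    udm=14 tells Google to skip AI Overviews, Knowledge Graph panels,
    Local Results, and similar clutter.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_google_url("small web"))
```

In Firefox the equivalent custom search string is just `https://www.google.com/search?q=%s&udm=14`.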