What should we conclude from those two extraneous dashes....
Nice article, though. Thanks.
They were sales people, and part of the pitch was getting the buyer to come to a particular idea "all on their own" and then making them feel good about how smart they were.
The other funny thing about em dashes is that a number of HN'ers use them, and I've seen them called bots. But when you dig deep into their posts, they were using em dashes 10 years back... Unless they are way ahead of the game in LLMs, it's a safe bet they are human.
These phrases came from somewhere, and when you look at large enough populations you're going to find people that just naturally align with how LLMs also talk.
This said, when the number of people who talk like that becomes too high, the statistical likelihood that they are all human drops considerably.
I can usually tell when someone is leading like this, and I resent them for trying to manipulate me. I start giving the opposite of the answer they're looking for, out of spite.
I've also had AI do this to me. At the end of it all, I asked why it didn't just give me the answer up front. It was a bit of a conspiracy theory, and it said I'd believe it more if I was led there with a bunch of context, thinking I'd gotten there on my own, rather than being told something fairly outlandish from the start. The fact that AI does this to better reinforce belief in conspiracy theories is not good.
Here's my list of current Claude (I assume) tics:
The API protest in 2023 took away tools from moderators. I noticed increased bot activity after that.
The IPO in 2024 means that they need to increase revenue to justify the stock price. So they allow even more bots to increase traffic which drives up ad revenue. I think they purposely make the search engine bad to encourage people to make more posts which increases page views and ad revenue. If it was easy to find an answer then they would get less money.
At this point I think Reddit themselves are creating the bots. The posts and questions are so repetitive. I've unsubscribed from a bunch of subs because of this.
I don't care how aggressive this sounds; name and shame.
Huffman should never be allowed to work in the industry again after what he and others did to Reddit (as you say, last bastion of the internet)
Zuckerberg should never be allowed after trapping people in his service and then selectively hiding posts (just for starters. He's never been a particularly nice guy)
Youtube and also Google - because I suspect they might share a censorship architecture... oh, boy. (But we have to remove + from searches! Our social network is called Google+! What do you mean "ruining the internet"?)
Wasn't that functionality just replaced? Parts of a query that are in quotation marks are required to appear in any returned result.
I'm really starting to wonder how much of the "ground level" inflation is actually caused by "passing on" the cost of anti-social behaviors to paying customers, as opposed to monetary policy shenanigans.
they have definitely made reddit far worse in lots of ways, but not this one.
"Latest" ignores score and only sorts by submission time, which means you see a lot of junk if you follow any large subreddits.
The default home-page algorithm used to sort by a composite of score, recency, and a modifier for subreddit size, so that posts from smaller subreddits didn't get drowned out. It worked pretty well, and users could manage what showed up by following/unfollowing subreddits.
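A rough sketch of that kind of composite, with made-up weights (Reddit's real formula isn't public, so treat this as illustrative only):

    import math, time

    def home_score(score: int, created_utc: float, subscribers: int) -> float:
        # Popularity with diminishing returns on votes.
        popularity = math.log10(max(score, 1) + 1)
        # Decay with age so fresh posts surface.
        age_hours = (time.time() - created_utc) / 3600.0
        recency = 1.0 / (1.0 + age_hours)
        # Modifier that favors smaller subreddits.
        size_boost = 1.0 / math.log10(max(subscribers, 10))
        return popularity * recency * size_boost

Sorting candidate posts by home_score descending gives the described behavior: high-scoring, recent posts from small subreddits float up instead of drowning.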
These are some pretty niche communities with only a few dozen comments per day at most. If Reddit becomes inhospitable to them then I'll abandon the site entirely.
Isn't that just fraud?
At the moment I am on a personal finance kick. Once in a while I find myself in the bogleheads subreddit. If you don't know, bogleheads have a cult-like worship of the founder of Vanguard, whose advice, shockingly, is to buy index funds and never sell.
Most of it is people arguing about VOO vs VTI vs VT. (lol) But people come in with their crazy scenarios, which are all far too varied to be from a bot, although the answer could easily be given by one!
Adding the option to hide profile comments/posts was also a terrible move for several reasons.
But recently it seems everything is more overrun than usual with bot activity, and half of the accounts are hidden which isn't helping matters. Utterly useless, and other platforms don't seem any better in this regard.
When are people who buy ads going to realize that the majority of their online ad spend is going towards bots rather than human eyeballs who will actually buy their product? I'm very surprised there hasn't been a massive lawsuit against Google, Facebook, Reddit, etc. for misleading and essentially scamming ad buyers.
Innovation outside of rich corporations will end. No one will visit forums, innovation will die in a vacuum, only the richest will have access to what the internet was, raw innovation will be mined through EULAs, and people striving to make things will just have their ideas stolen as a matter of course.
Like wearing a mask on one's head to ward off tigers.
Ignore the search engines, ignore all the large companies and you're left with the "Old Internet". It's inconvenient and it's hard work to find things, but that's how it was (and is).
https://en.wikipedia.org/wiki/Yahoo#Founding
The original Yahoo doesn't exist (outside archive.org), but I'm guessing there would be a keen person or two out there maintaining a replacement. It would probably be disappointing, as manually curated lists work best when the curator's interests are similar to your own.
What you want might be Kagi Search with the AI filtering on? I've never used Kagi, so I could be off with that suggestion.
On the other hand, the fact that we can't tell doesn't say so much about how good AIs are as it says about how bad most of our (at least online) interaction is. How much of the (Thinking, Fast and Slow) System 2 am I putting into these words? How much is just repeating and combining patterns in a given direction, pretty much like an LLM does? In the end, that is what most internet interactions are comprised of, whether done directly by humans, by algorithms, or otherwise.
There are bits and pieces of exceptions to that rule, and maybe closer to the beginning, before widespread use, there was a bigger percentage, but today, in the big numbers, the usage is not so different from what LLMs do.
1. Text is a very compressed / low information method of communication.
2. Text inherently has some “authority” and “validity”, because:
3. We've grown up to internalize that text is written by a human. Someone spent the effort to think and write down their thoughts, and probably put some effort into making sure what they said is not obviously incorrect.
Ultimately this ties into why LLMs working in text have an easier time tricking us into thinking they are intelligent than an AI system in a physical robot that needs to speak and articulate physically would. We give it the benefit of the doubt.
I’ve already had some odd phone calls recently where I have a really hard time distinguishing if I’m talking to a robot or a human…
One consequence, IMHO, is that we won't value long papers anymore. Instead, we will want very dense, high-bandwidth writing on whose validity the author stakes consequences (monetary, reputational, etc.).
on the other hand
To pass the Turing test the AI would have to be indistinguishable from a human to the person interrogating it in a back and forth conversation. Simply being fooled by some generated content does not count (if it did, this was passed decades ago).
No LLM/AI system today can pass the Turing test.
Most of them come across to me like they would think ELIZA passes it, if they weren't told up front that they were testing ELIZA.
That is, the main thing that makes it possible to tell LLM bots apart from humans is that lots of us have over the past 3 years become highly attuned to specific foibles and text patterns which signal LLM generated text - much like how I can tell my close friends' writing apart by their use of vocabulary, punctuation, typical conversation topics, and evidence (or lack) of knowledge in certain domains.
Think of the children!!!
Most people probably don't know, but I think on HN at least half of the users know how to do it.
It sucks to do this on Windows, but at least on Mac it's super easy and the shortcut makes perfect sense.
Show HN: Hacker News em dash user leaderboard pre-ChatGPT - https://news.ycombinator.com/item?id=45071722 - Aug 2025 (266 comments)
... which I'm proud to say originated here: https://news.ycombinator.com/item?id=45046883.
I'm safe. It must be one of you that is the LLM!
(Hey, I'm #21 on the leaderboard!).
I will still sometimes use a pair of them for an abrupt appositive that stands out more than commas, as this seems to trigger people's AI radar less?
Hyphen (-) — the one on your keyboard. For compound words like “well-known.”
En dash (–) — medium length, for ranges like 2020–2024. Mac: Option + hyphen. Windows: Alt + 0150.
Em dash (—) — the long one, for breaks in thought. Mac: Option + Shift + hyphen. Windows: Alt + 0151.
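If you ever need to check which dash you're looking at, the Unicode code points are unambiguous; a quick Python check:

    # Print each dash with its Unicode code point.
    for name, ch in [("hyphen", "-"), ("en dash", "\u2013"), ("em dash", "\u2014")]:
        print(f"{name}: {ch} U+{ord(ch):04X}")
    # hyphen: - U+002D
    # en dash: – U+2013
    # em dash: — U+2014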
And now I also understand why having plenty of actual em-dashes (not double hyphens) is an “AI tell”.
En dash is compose - - . (and em dash is compose - - -).
You can type other fun things, like the section symbol § (compose s o), fractions like ⅐ (compose 1 7), the degree symbol ° (compose o o), etc.
https://itsfoss.com/compose-key-gnome-linux/
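For what it's worth, on Linux you can also spell these out yourself in ~/.XCompose (a sketch; whether it's honored depends on your input method, e.g. plain X11 or an IM module configured to read this file):

    include "%L"  # keep the locale's default sequences

    # The standard dash sequences, written out explicitly:
    <Multi_key> <minus> <minus> <period> : "–" U2013 # EN DASH
    <Multi_key> <minus> <minus> <minus>  : "—" U2014 # EM DASH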
On phones you merely long press hyphen to get the longer dash options.
That said I always use -- myself. I don't think about pressing some keyboard combo to emphasise a point.
Might as well be yourself.
It lets users type all sorts of ‡s, (*´ڡ`●)s, and 2026/01/19s, by name, on Windows, Mac, and Linux, through pc101, standard Dvorak, or your custom QMK config, anywhere, without much prior knowledge. All it takes is a little proto-AI, anywhere from floppy size to at most a few hundred MBs, rewriting your input somewhere between the physical keyboard and the text input API.
If I want em-dashes, I can do just that instantly. I'm on Windows and I don't know what the key combinations are. Doesn't matter. I say "emdash" and here be an em-dash. There should be an equivalent of this thing for everybody.
To that end, I think people will work on increasingly elaborate methods of blocking AI scrapers and perhaps even search engine crawlers. To find these sites, people will have to resort to human curation and word-of-mouth rather than search.
Of course, if (big if) it does end up being large enough, the value of getting an invite will get to a point where a member can sell access.
It may be great right now but the users do not control their own destinies. It looks like there are tools users can use to export their data but if Discord goes the enshittification route they could preemptively block such tools, just as Reddit shut down their APIs.
I can't do reddit anymore, it does my head in. Lemmy has been far more pleasant, as there is still good posting etiquette.
For licensed professions, they have registries where you can look people up and confirm their status. The bot might need to carry out a somewhat involved fraud if they're checking.
Also, on subreddits functioning as support groups for certain diseases, you'll see posts that just don't quite add up, at least if you know something about the disease (because you or a loved one has it). Maybe they're "zebras" with a highly atypical presentation (e.g., very early age of onset), or maybe they're "Munchies." Or maybe LLMs are posting spurious accounts of their cancer or neurodegenerative disease diagnoses, to which well-meaning humans actually afflicted with the condition respond (probably alongside bots) with their sympathy and suggestions.
Social media has become the internet and/or vice-versa.
Also, I think you're objectively wrong in this statement:
"the actual function of this website is, which is to promote the views of a small in crowd"
Which I don't think was the actual function of (original) social media either.
I know this was a throwaway parenthetical, but I agree 100%. I don't know when the meaning of "social media" went from "internet based medium for socializing with people you know IRL" to a catchall for any online forum like reddit, but one result of this semantic shift is that it takes attention away from the fact that the former type is all but obliterated now.
Discord is the 9,000lb gorilla of this form of social media, and it's actually quietly one of the largest social platforms on the internet. There's clearly a desire for these kinds of spaces, and Discord seems to be filling it.
While it stinks that it is controlled by one big company, it's quite nice that its communities are invite-only by default and largely moderated by actual flesh-and-blood users. There's no single public shared social space, which means there's no one shared social feed to get hooked on.
Pretty much all of my former IRC/Forum buddies have migrated to Discord, and when the site goes south (not if, it's going to go public eventually, we all know how this story plays out), we expect that we'll be using an alternative that is shaped very much like it, such as Matrix.
Discord is many things. Private chat groups, medium communities and then larger communities with tens of thousands of users.
So what's wrong with that?
The "former type" had to do with online socializing with people you know IRL.
I have never seen anything on Discord that matches this description.
In fact, I'd say it's probably the easiest way to bootstrap a community around a friend-group.
The other part of this is that Discord has official university hubs, so the college kids are all in there. You need an email address from that university to join: https://support.discord.com/hc/en-us/articles/4406046651927-...
It's similar to Apple's strategy of trying to get the Macintosh into classrooms (in the 80s/90s), and to student discounts on Adobe products.
I am not a huge fan of Discord, although I do use it. It's very good at what it does, and the communities it houses are well moderated, at least the ones that I have joined. I dislike that they've taken over communities and walled them off from the "searchable" internet.
I'm in a friend Discord server. It's naturally invisible unless someone sends you an invite.
And I know servers like these are in the top tier of engagement for Discord as a whole, because they keep being picked for A/B testing new features. Like, we had activities some half a year early. We actually had the voice modifiers on two of them, and most people don't even know that was a thing.
But, the “know IRL” split is a bit artificial I think. For example my discord is full of people I knew in college: I knew them IRL for four years, and then we all moved around and now we’ve known each other online for decades. Or childhood friends. By now, my childhood friend and college friend circles are partially merged on discord, and they absolutely know each other (unfortunately there’s no way for you to evaluate this but I know them all quite well and it would be absurd to me, to consider them anything other than friends).
The internet is part of the real world now. People socialize on it. I can definitely see a distinction between actually knowing somebody, and just being in a discord channel with them. But it is a fuzzy social thing I think, hard to nail down exactly where the transition is (also worth noting that we have acquaintances that we don’t really “know” offline, the cashier at our favorite shops for example).
"Social media" never meant that. We've forgotten already, but the original term was "social network" and the way sites worked back then is that everyone was contributing more or less original content. It would then be shared automatically to your network of friends. It was like texting but automatically broadcast to your contact list.
Then Facebook and others pivoted towards "resharing" content and it became less "what are my friends doing" and more "I want to watch random media" and your friends sharing it just became an input into the popularity algorithm. At that point, it became "social media".
HN is neither since there's no way to friend people or broadcast comments. It's just a forum where most threads are links, like Reddit.
Let's remember that the original idea was to connect with people in your college/university. I faintly recall this time period because I tried to sign up for it only to find out that while there had been an announcement that it was opened up internationally, it still only let you sign up with a dot EDU email address, which none of the universities in my country had.
In the early years "social media" was a lot more about having a place to express yourself or share your ideas and opinions so other people you know could check up on them. Many remember the GIF anarchy and crimes against HTML of Geocities but that aesthetic also carried over to MySpace while sites like Live Journal or Tumblr more heavily emphasized prose. This was all also in the context of a more open "blogosphere" where (mostly) tech nerds would run their own blogs and connect intentionally much like "webrings" did in the earlier days for private homepages and such before search engine indexing mostly obliterated their main use.
Facebook pretty much created modern "social media" by creating the global "timeline", forcing users to compete with each other (and corporate brands) for each other's attention while also focusing the experience more on consumption and "reaction" than creation and self-expression. This in turn resulted in more "engagement" which eventually led to algorithmic timelines trying to optimize for engagement and ad placement / "suggested content".
HN actually follows the "link aggregator" or "news aggregator" lineage of sites like Reddit, Digg, Fark, etc. (there were also "bookmark aggregators" like StumbleUpon, but most of those died out too). In terms of social interactions it's more like, e.g., the Slashdot comment section, even though the "feed" is somewhat "engagement driven" like on social media sites. But as you said, it lacks all the features that would normally be expected, like the ability to "curate" your timeline (or in fact, having a personalized view of the timeline at all) or being able to "follow" specific people. You can't even block people.
"Social Media" had become a euphemism for 'scrolling entertainment, ragebait and cats' and has nothing to do 'being social'. There is NO difference between modern reddit and facebook in that sense. (Less than 5% of users are on old.reddit, the majority is subject to the algorithm.)
Better back button handling and fixing the location bugs in event creation may well be entirely beyond Llama, sadly.
you can also invite a music bot or host your own that will join the voice channel with a song that you requested
The big thing is the voice/videoconferencing channels which are actually optimized insanely well, Discord calls work fine even on crappy connections that Teams and Zoom struggle with.
Simply put it's Skype x MSN Messenger with a global user directory, but with gamers in mind.
When we get to alternative proposals with functioning calls, I'd say having voice channels that just exist 24/7 is a big thing too. It's a tiny thing from a technical perspective, but it makes something like Teams an unsuitable alternative to Discord.
In Teams you start a call and everyone's phone rings; you distract everyone from whatever they were doing -- you'd better have a good reason for doing so.
In Discord you just join an empty voice channel (on your private server with friends) without any particular reason and go on with your day. Maybe someone sees that you're there and joins, maybe not. No need to think of anyone's schedule, and you don't annoy people who don't have time right now.
Until you join a server that greets you with a whole essay of what you can and cannot do, plus extra verification. This then requires you to post in some random channel and wait for a moderator to see your message.
You're then forced to assign roles to yourself to please a bot that will continue to spam you with notifications, announcing to the community that you've leveled up, for every second sentence. Finally, everyone glares at you in channel or leaves you on read because you're a newbie with a leaf above your username. To each their own, I guess.
/server irc.someserver.net
/join #hello
/me says Hello
I think I'll stick with that.
At least Discord and IRC are interchangeable for the sake of idling.
1. People don't understand or want to set up a client that isn't just loading some page in their browser.
2. People want to post images and see the images they posted without clicking through a link; in some communities images might be shared more than text.
3. People want a persistent chat history they can easily access from multiple devices, with notifications etc.
4. Voice chat; many IRC communities would run a tandem Mumble server too.
All of these are solvable for a tech-savvy enough IRC user, but Discord gets you all of this out of the box with barely more than an email account.
There are probably more, but these are the biggest reasons why it felt like within a year I was idling in channels by myself. You might not want discord but the friction vs irc was so low that the network effect pretty much killed most of IRC.
5 minutes after the first social network became famous. It never really was just about knowing people IRL; that was only in the beginning, until people started connecting with everyone and their mother.
Now it's about people and them connecting and socializing. If there are persons, then it's social. HN has profiles where you can "follow" people; thus, it's social on a minimal level. Though we could dispute whether it's just media or a mature network, because there obviously are notable differences in social-related features between HN and Facebook.
I’ve definitely been reducing my day-to-day use of em-dashes the last year due to the negative AI association, but also because I decided I was overusing them even before that emerged.
This will hopefully give me more energy for campaigns to champion the interrobang (‽) and to reintroduce the letter thorn (Þ) to English.
Instead of modifier plus keypress, it's a modifier and a 4-digit combination that I'll never remember.
In X amount of time a significant majority of road traffic will be bots in the drivers seat (figuratively), and a majority of said traffic won't even have a human on-board. It will be deliveries of goods and food.
I look forward to the various security mechanisms required of this new paradigm (in the way that someone looks forward to the tightening spiral into dystopia).
I actually prefer to work in the office, it's easier for me to have separate physical spaces to represent the separate roles in my life and thus conduct those roles. It's extra effort for me to apply role X where I would normally be applying role Y.
Having said that, some of the most productive developers I work with I barely see in the office. It works for them to not have to go through that whole ... ceremoniality ... required of coming into the office. They would quit on the spot if they were forced to come back into the office even only twice a week, and the company would be so much worse off without them. By not forcing them to come into the office, they come in of their own volition, and therefore do not resent it, and therefore do not (or are slower to) resent their employer.
Nah. That's assuming most cars today, with literal (not figurative) humans, are delivering goods and food. But they're not: most cars during traffic hours, by very, very, very far, are just delivering grocery-less people from point A to point B. In the morning: delivering a human (usually driven by said human) to work. Delivering a human to school. Delivering a human back home. Delivering a human back from school.
Accidents Georg, who lives in a windowless car and hits someone over 10,000 times each day, is an outlier and should not have been counted.
Drivers are literally the biggest cause of deaths of young people. We should start applying the same safety standards we do to every other part of life.
By the way, I don't bike, but I walk just about everywhere lately. So to hyperbolize, as is the custom on the internets, I live in constant fear not of cars, but of holier-than-thou eco cyclists running me over. (Yeah, I'm not in NL.)
Anyway, a fix that should work fine for both of you is to take a lane from cars and devote it to cyclists. Nobody actually wants to bike where people walk, some places just have bad infrastructure.
They think they're martyrs or something. What am I then if I take a backpack and do my shopping on foot? I'm even more eco because I didn't spend manufacturing resources on a bike, and even more of a martyr because walking is slow.
> to take a lane from cars and devote it to cyclists. Nobody actually wants to bike where people walk
Yep, see my NL reference :)
https://news.ycombinator.com/item?id=46674621 and https://news.ycombinator.com/item?id=46673930 are the top comments and that's about as good as HN gets.
Answer? Probably "of course not"
They're too busy demonetizing videos, aggressively copyright striking things, or promoting Shorts, presumably
Which will eventually get worked around and can easily be masked by just having a backing track.
Think about it— the robots didn’t invent the em-dash. They’re copying it from somewhere.
Seriously, she used dashes all the time. Here is a direct copy and paste of the first two stanzas of her poem "Because I could not stop for Death" from the first source I found, https://www.poetryfoundation.org/poems/47652/because-i-could...
Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just Ourselves –
And Immortality.
We slowly drove – He knew no haste
And I had put away
My labor and my leisure too,
For His Civility –
Her dashes have been rendered as en dashes in this particular case rather than em dashes, but unless you're a typography enthusiast you might not notice the difference (I certainly didn't, and thought they were em dashes at first). I would bet that if I hunted I would find some places where her poems have been transcribed with em dashes. (It's what I would have typed if I were transcribing them.) https://www.edickinson.org/editions/1/image_sets/12174893
Dickinson's dashes tended to vary over time, and were (mostly) not typeset during her lifetime. Also, mid-19th-century usage was different: the em dash was a relatively new thing.
Long-press on the hyphen on most Android keyboards.
Or open whatever "Character Map" application usually comes with your desktop OS, and copy it from there.
I also use en dashes when referring to number ranges, e.g., 1–9
I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...
The HN text area does not insert em dashes for you and never has. On my phone keyboard it's a very deliberate action to add one (symbol mode, long-press hyphen, slide my finger over to em dash).
The entire point is that it's contextual - em dashes where no accommodations make them likely.
I think the emoji one is most pronounced in bullet point lists. AI loves to add an emoji to bullet points. I guess they got it from lists in hip GitHub projects.
The other one is not as strong but if the "not X but Y" is somewhat nonsensical or unnecessary this is very strong indicator it's AI.
I see this way more often on GitHub now than I did before, though.
Why not? Surely you can ask your friendly neighbourhood AI to run a consistent channel for you?
Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.
I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".
It's not oft that you run into such alluring confirmation of your point.
My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.
No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)
Down with that foul fifth glyph! Down, I say!
(Assuming you did actually hand craft that I thumbs-up both your humor and industry good sir)
I rEgrEt that I havE not donE thE samE, but plEase accEpt bad formatting as a countErpoint.
I do the same on my websites. It's embedded into my static site generator.
Very related: https://practicaltypography.com/
Even if I'm 100% certain it's not AI slop, it's still a very strong indicator that the videos are some kind of slop.
This is "innocent" if you accept that the author's goal is simplify to maximize engagement and YouTube is helping them do that. It's not if you assume the author wants users to see exactly what they authored.
Of course there are still "trusted" mainstream sources, expect they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
You get it wrong. Real-world content will become indistinguishable from "AI" content because that's what people will consider normal.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
(Famous alleged example, in second paragraph: https://en.wikipedia.org/wiki/Killian_documents_controversy#... )
> Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered
I read it as "producing racist videos can sometimes be used in good faith"?
Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.
But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.
Edit: please, prove your illiteracy and lack of critical thinking skills in the comments below
Do you realize how crazy this sounds?
Edit: I literally demonstrate my ability to think critically.
I think many here would say "yes!" to this question, so can saying "no" be justified by an anti-racist?
Generally I prefer questions that do not lead to thoughts being terminated. Seek to keep a discussion going, not stop it.
On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, even if, by doing so, we leave our people more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments thinks yes, it's good to use propaganda.
Because that's explicitly what happened during the rise of Nazi Germany; the USA had an official national programme of propaganda awareness and manipulation resistance which had to be shut down because the country needed to use propaganda on their own citizens and the enemy during WW2.
So back to the first question: it's not the content (whether it's racist or not), it's the effect: would producing fake content reach a desired policy goal?
Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.
But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?
Propaganda is war, and each time we use war measures, we're getting closer to it.
Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.
If you add to the video a disclaimer: "this video has been AI-generated, but it shows events that happen all across the US daily", then there's no problem. Nobody is being lied to about anything. The video shows the message; it's not faking anything. But when you pass off a fake video as a real occurrence, then you're lying, and it's as simple as that.
Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!
Not sure how I feel about that, to be honest. On one hand I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them researching a bit after my own mom called me raging about it and sent me the link.
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in, only those locked into the Messenger ecosystem.
I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.
The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.
No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."
But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.
[1]: for approximate values of "nobody"
You’re literally saying why people want to be angry.
My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.
But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.
Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.
It's both that, as an individual, you can be inundated with things that feel like you have a moral obligation to react. On the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.
Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.
I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.
The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.
Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.
At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.
If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.
And for example, I never heard anyone who was a big Fox News, Rush Limbaugh, or Alex Jones fan say that they wanted to be angry or paranoid (to be fair, this was pre-Trump and a while ago), yet every single one of them I saw got angry and paranoid after watching, if you paid any attention at all.
Because their purpose in seeking it out is not to get angry, it's to stop something from happening that they perceive as harmful.
I doubt most people watch Alex Jones because they love being angry. They watch him because they believe a global cabal of evildoers is attacking them. Anger is the logical consequence, not the desired outcome. The desired outcome is that the perceived problem is solved, i.e. that people stop kicking cats.
We can chicken/egg about it all day, but at some point if people didn’t want it - they wouldn’t be doing it.
Depending on the definition of ‘want’ of course. But what else can we use?
I don’t think anyone would disagree that smokers want cigarettes, eh? Or gamblers want to gamble?
None of these people said to themselves, "I want to be angry today, and I heard that Alex Jones makes people angry, therefore I will watch Alex Jones."
A lot of people really do, and it predates any sort of media too. When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.
But again in this situation the goal is not to be angry.
This sort of behaviour emerges as a consequence of unhealthy group dynamics (and to a lesser extent, plain boredom). By gossiping, a person expresses understanding of, and reinforces, their in-group’s values. This maintains their position in the in-group. By embellishing, the person attempts to actually increase their status within the group by being the holder of some “secret truth” which they feel makes them important, and therefore more essential, and therefore more secure in their position. The goal is not anger. The goal is security.
The emotion of anger is a high-intensity fear. So what you are perceiving as “seeking out a reason to be angry” is more a hypervigilant scanning for threats. Those threats may be to the dominance of the person’s in-group among wider society (Prohibition is a well-studied historical example), or the threats may be to the individual’s standing within the in-group.
In the latter case, the threat is frequently some forbidden internal desire, and so the would-be transgressor externalises that desire onto some out-group and then attacks them as a proxy for their own self-denial. But most often it is simply the threat of being wrong, and the subsequent perceived loss of safety, that leads people to feel angry, and then to double down. And in the world we live in today, that doubling down is more often than not rewarded with upvotes and algorithmic amplification.
Things that they have no fear about, and so do not register as warranting brain time.
> eager to get to the outrageously stuff.
The things which are creating a feeling of fear.
It’s not necessary for the source of a fear to exist in the present moment, nor for it to even be a thing that is real. For as long as humans have communicated, we have told tales about things that go bump in the dark. Tales of people who, through their apparent ignorance of the rules of the group, caused the wrath of some spirits who then punished the group.
It needn’t matter whether a person’s actions actually caused a problem, or whether it caused the spirits to be upset, or indeed whether the spirits actually ever existed at all. What matters is that there is a fear, and there is a story about that fear, and the story reinforces some shared group value.
> It's a pattern of behavior which old people in particular commonly fall into,
Here is the fundamental fear of many people: the fear of obsolescence, irrelevance, abandonment, and loss of control. We must adapt to change, but also often have either an inability or unwillingness to do so. And so the story becomes it is everyone else who is wrong. Sometimes there is wisdom in the story that should not be dismissed. But most often it is just an expression of fear (and, again, sometimes boredom).
What makes this hypothesis seem so unbelievable? Why does it need to be people seeking anger? What would need to be true for you to change your opinion? This discussion thread is old, so no need to spend your energy on answering if you don’t feel strongly about it. Just some parting questions to mull over in the bath, perhaps.
Thank you for raising this idea originally, and for engaging with me on it.
I disagree. Why are some of the most popular subreddits things like r/AmITheAsshole, r/JustNoMIL, r/RaisedByNarcissists, r/EntitledPeople, etc.: forums full of (likely fake) stories of people behaving egregiously, with thousands of outraged comments throwing fuel on a burning pile of outrage: "wow, your boyfriend/girlfriend/husband/wife/father/mother/FIL/MIL/neighbor/boss/etc. is such an asshole!" Why are advice/gossip columns that provide outlets for similar stories so popular? Why is reality TV full of the same concocted situations so popular? Why is people's first reaction to outrageous news stories to bring out the torches and pitchforks, rather than trying to first validate the story? Why can an outrageous lie travel halfway around the world while the truth is still getting its boots on?
I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).
It's not built to make people angry per se - it's built to optimise for revenue generation - which just happens to mean content that makes people angry.
People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.
In my view, if the platforms can't tackle this problem then the platforms should be shut down - promoting this sort of material should be illegal, and it's not an excuse to say our business model won't work if we are made responsible for the things we do.
i.e. while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing - which is being responsible for your actions - and if you haven't solved both sides you don't have a viable business model, in my view.
I think blaming it all on money ignores that this also serves political goals.
Groups spend money to manipulate public opinion. It’s a goal in and of itself that has value rather than a money making scheme.
For example, the 'Russian interference' in the 2016 US election was, I suspect, mostly people trying to make money, and, more importantly, was completely dwarfed by US direct political spending.
There is also a potentially equal, if not larger, problem in the politicisation of the 'anti-disinformation' campaigns.
To be honest I'm not sure if there is much of a difference between a grifter being directly paid to promote a certain point of view, and somebody being paid indirectly ( by ads ).
In both cases neither really believes in the political point they are making; they are just following the money.
These platforms are enabling both.
Anger increases stickiness. Once one discovers there are other people on the site, and they are guilty of being wrong on the internet, one is incentivized to correct them. It feels useful because it feels like you're generating content that will help other people.
I suspect the failure of the system that nobody necessarily predicted is that people seem to not only tolerate, but actually like being a little angry online all the time.
This is one level of abstraction more than I deal with on a normal day.
The fake video which plays into people’s indignation for racism, is actually about baiting people who are critical about being baited by racism?
You're training yourself with a very unreliable source of truth.
Intentionally if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.
I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
If bots reference real sources it's still a valid argument.
Personally, I don't think that behavior is very healthy, and the other parent comment suggested an easy "get out of jail free" way of not thinking about it anymore while also limiting anxiety: they're unreliable subreddits. I'd say take that advice and move on.
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
Any day now… right?
If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.
Now that photos and videos can be faked, we'll have to go back to the older system.
I am no big fan of AI but misinformation is a tale as old as time.
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
It isn’t the same staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.
But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.
If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.
But today's text manufacturing isn't our grand..., well, yesterday's text manufacturing.
And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.
The big problem isn’t the tech getting smarter though.
It’s the legal and social tolerance for conflict of interests at scale. Like unwanted (or dark pattern permissioned) surveillance which is all but unavoidable, being used to manipulate feeds controlled by third parties (between us and any organic intentioned contacts), toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.
Incentives, as they say, matter.
Misinformation would exist regardless, but we didn’t need it to be a cornerstone business model with trillions of dollars of market cap unifying its globally coordinated efficient and effective, near unavoidable, continual insertion into our and our neighbors lives. With shareholders relentlessly demanding double digit growth.
Doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a little bit of prompting and waiting; no skill is required. Second, you can do that on a massive scale. You can easily make 2 AI videos a day. If you want to doctor videos "the old way", you'll need a team of VFX artists to do it at this scale.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
And yes, I know the argument about YouTube being a platform that can be used for good and bad. But Google controls and creates the algorithm and what is pushed to people. Make it a dumb video hosting site like it used to be and I'll buy the "bad and good" angle.
On the actual open decentralized internet, which still exists, mastodon, IRC, matrix... bots are rare.
Any platform that wants to resist bots needs to:
- tie personas to real or expensive identities
- force people to add an AI flag to AI content
- let readers filter out content not marked as AI
- and be absolutely ruthless in permabanning anyone who posts AI content unmarked; one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
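For what it's worth, the policy itself is trivial to state in code; the hard parts are its inputs. A minimal sketch (all names hypothetical; it assumes a reliable AI detector and identity layer, which is exactly what doesn't exist):

    banned: set[str] = set()  # persona ids with a confirmed unmarked-AI strike

    def moderate(persona: str, declared_ai: bool, detected_ai: bool) -> str:
        """One-strike policy: unmarked AI content means a permanent ban."""
        if persona in banned:
            return "rejected: persona is permabanned"
        if detected_ai and not declared_ai:
            banned.add(persona)  # one strike and you are dead forever
            return "removed: unmarked AI content, persona permabanned"
        if declared_ai:
            return "accepted: visible only to readers who allow AI-flagged content"
        return "accepted"

Which makes the weaponization point concrete: a single false positive in detected_ai is a permanent ban, so whoever controls that signal controls the bans.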
Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections of the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to ban evade.
So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpbb" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.
IME young people use Discord, and those servers often require permission to even join. Nearly all my fandom communications happen on a few Discord servers, most of which you cannot join without an invitation, and if you're kicked (bad actors will be kicked), you cannot re-join (without permission).
It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.
I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.
People left and never came back.
But those bots were certainly around in the 90s
Then the comments usually aren't critical of the image; instead they portray the people supporting the [fake] image as being in a cult. It's wild!
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
Maybe I should set up a Pi-Hole business...
Also, can you set Windows not to allow ad notifications through to the notification bar? If not, that should also be a point of the law.
Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.
... which doesn't sound impossible. It's also entirely possible that the value of Section 230 has run its course and it should generally be remarkably curtailed (its intent was to make online forums and user-generated-content networks, of which ad networks are a kind, possible, but one could make the case that it has been demonstrated that operators of online forums have immense responsibility and need to be held accountable for more of the harms done via the online spaces they set up).
> divisiveness this kind of stuff will create
I'm pretty sure we're already decades into the world of "has created". Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us). It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well being) one way or the other.
It's just been too much for too long and you can tell.
As someone who’s read a newspaper daily for 30+ years, that is definitely not true. The news has always tried to capture your attention but doing so using anger and outrage, and using those exclusively, is a newer development. Newspapers and broadcast news used to use humor, suspense, and other things to provoke curiosity. When the news went online, it became focused on provoking anger and outrage. Even print edition headlines tend to be tamer than what’s in the online edition.
It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.
They didn't say to avoid certain tech. They said to avoid takes and news headlines.
Your conflation of those two is like someone saying "injecting bleach into your skin is bad" and you responding with "oh, so you oppose cleaning bathrooms [with bleach]?"
Your bleach scenario is confusing to me; it's also you arguing against something completely unrelated to the discussion here.
In fact, it's easier than ever to see the intended benefit of such a lifestyle.
It really isn't that hard, if I'm looking at my experience. Maybe a little stuff on here counts. I get my news from the FT, it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y, I suspect it's closer to becoming vogue than not?
Being led around by the nose is a choice still, for now at least.
I mostly get gaming and entertainment news for shows I watch, but even between those I get CNN and Fox News, both which I view as "opinion masquerading as news" outlets.
My mom shares so many articles from her FB feed that are both mainstream (CNN, etc) nonsense and "influencer" nonsense.
I have no news feed on my phone. I doubt on android it is any harder to evade. Social media itself is gone. The closest I get to click-bait is when my mother spouts something gleaned from the Daily Mail. That vector is harder to shift I concede!
Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.
A luddite would refuse the COVID vaccine. They'd refuse improved trains. They'd refuse EVs, etc. This is because Luddism is blanket opposition to technological improvements.
That’s a very unfair accusation to throw at someone off the cuff. Anyway, what you wrote is not what a Luddite is at all, especially not the anti-vaccine accusation. I don’t think you’re being deliberately deceptive here, I think you just don’t know what a Luddite is (was).
For starters: They were not anti-science/medicine/all technology. They did not have “blanket opposition to all technological improvement.” You’re expressing a common and simplistic misunderstanding of the movement and likely conflating it with (an also flawed understanding of) the Amish.
They were, at their core, a response against industrialization that didn’t account for the human cost. This was at the start of the 19th century. They wanted better working conditions and more thoughtful consideration as industrialization took place. They were not anti-technology and certainly not anti-vaccine.
The technology they were talking about was mostly related to automation in factories which, coupled with anti-collective-bargaining initiatives, led to further dehumanization of the workforce as well as all sorts of novel and horrific workplace accidents for adults and children alike. Their calls for "common sense laws" and "guardrails" are echoed today in how many of us talk about AI/LLMs.
Great comic on this: https://thenib.com/im-a-luddite/
So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.
The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter didn't have to ask "do I know everything about everything so I can select the best person?" and instead was asked "of this finite, smaller group, who do I think is best to represent me at the next level?" We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their party's local representative within the party itself (which is where candidates are selected, after all).
Most people I know who have strong political opinions (as well as those who don't) can't name their own city council members or state assemblyman, and that's a real problem for functioning representative democracy. Not only for their direct influence on local policy, but also because these levels of government also serve as the farm team or proving grounds for higher levels of office.
By the time candidates are running with the money and media of a national campaign, in some sense it's too late to evaluate them on matters of their specific policies and temperaments, and you kind of just have to assume they're going to follow the general contours of their party. By and large, it seems the entrenched political parties (and, perhaps, parties in general) are impediments to good governance.
The accidents that let it occur may no longer be present - there are arguments that "democracy" as we understand it was impossible before rapid communication, and perhaps it won't survive the modern world.
We're living in a world where a swing voter in Ohio may have more effect/impact on Iran than a person living there - or even more effect on Europe than a citizen of Germany.
Voting on principles is fine and good.
The issue is the disconnect between professed principles and action. And the fact that nowadays there are not many ways to pick and choose principles except two big preset options.
Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated/grouped list of headlines from around the world, built from a multitude of sources.
Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.
To be fair, this was a long process. And I still regress sometimes. I started my working life on the editorial team of an email portal. Our job was to generate headlines that would catch people logging in to read their mail and get them to read our crap instead - because ads embedded within content paid far better than ads around email.
So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world has changed since then. It became even more flammable. And more people seem to be playing with matches. I changed - and changed jobs and industries a few times.
So over time I reduced my news intake. And during the pandemic I learned to seriously reduce my social media usage - it is just not healthy for my state of mind. Because I am way too easily dopamine-addicted and trigger-able. I am a classic xkcd.com/386 case.
Simulacra and Simulation came out in '81, for an example of how long this has been a recognized phenomenon
It’s quite easy actually. Like the OP, I have no social media accounts other than HN (which he rightfully asserts isn’t social media but is the inheritor of the old school internet forum). I don’t see the mess everyone complains about because I choose to remove myself from it. At the same time, I still write code every day, I spend way too much time in front of a screen, and I manage to stay abreast of what’s new in tech and in the world in general.
Too many people conflate social media with technology more broadly and thus make the mistake of thinking that turning away from social media means becoming a luddite. You can escape the barrage of trolls and hottakes by turning off social media while still participating in the much smaller but saner tech landscape that remains.
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
If I just feed it to 10 pandas, today, they're all dead.
And I suspect that humanity's position in this analogy is far closer to the latter than the former.
We truly live in wonderful times!
As others have noted, it’s a long-term trend - agree that as you note it’ll get worse. The Russian psy-ops campaigns from the Internet Research Agency during Trump #1 campaign being a notable entry, where for example they set up both fake far-left and far-right protest events on FB and used these as engagement bait on the right/left. (I’m sure the US is doing the same/worse to their adversaries too.)
Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.
Also, on the phrase "you're absolutely right": it's definitely a phrase my friends and I use a lot, albeit in a sort of sarcastic manner when one of us says something obvious, but, nonetheless, we use it. We also tend to use "Well, you're not wrong", again in a sarcastic manner, for something which is obvious.
And, no, we’re not from non English speaking countries (some of our parents are), we all grew up in the UK.
Just thought I'd add that in there, as it's a bit extreme to see an em dash and instantly jump to "must be written by AI".
You’re not the first person I’ve seen say that FWIW, but I just don’t recall seeing the full proper em-dash in informal contexts before ChatGPT (not that I was paying attention). I can’t help but wonder if ChatGPT has caused some people - not necessarily you! - to gaslight themselves into believing that they used the em-dash themselves, in the before time.
Also, I was a curmudgeon with strong opinions about punctuation before ChatGPT—heck, even before the internet. And I can produce witnesses.
It'd be just as wrong as using an apostrophe instead of a comma.
Grammar is often woolly in a widely used language with no single centralised authority. Many of the "hard rules" some people think are fundamental truths are often more like local style guides, and often a lot more recent than some people seem to believe.
I never saw em-dashes—the longer version with no space—outside of published books and now AI.
Just to say, though, we em-dashers do have pre-GPT receipts:
Compose, hyphen, hyphen, period: produces – (en dash)
Compose, hyphen, hyphen, hyphen: produces — (em dash)
And many other useful sequences too, like Compose, lowercase o, lowercase o to produce the ° (degree) symbol. If you're running Linux, look into your keyboard settings and dig into the advanced settings until you find the Compose key, it's super handy.
P.S. If I was running Windows I would probably never type em dashes. But since the key combination to type them on Linux is so easy to remember, I use em dashes, degree symbols, and other things all the time.
There are compose key implementations for Windows, too.
> m-dash (—)
> Do not use; use an n-dash instead.
> n-dash (–)
> Use in a pair in place of round brackets or commas, surrounded by spaces.
Remember, I'm specifically speaking about British English.
But I see what you mean. There used to be a distinction between a shorter dash that is used for numerical ranges, or for things named after multiple people, and a longer dash used to connect independent clauses in a sentence [1]. I am shocked to hear that this distinction is being eroded.
[1] https://design.tax.service.gov.uk/hmrc-content-style-guide/
> Spaced en rules (or ‘en dashes’) must be used for parenthetical dashes. Hyphens or em rules (‘em dashes’) will not be accepted for either UK or US style books. En rules (–) are longer than hyphens (-) but shorter than em rules (—).
Section 2.1, "Editorial services style guide for academic books" https://www.cambridge.org/authorhub/resources/publishing-gui...
If you have the Compose key [1] enabled on your computer, the keyboard sequence is pretty easy: `Compose - - -` (and for en dash, it's `Compose - - .`). Those two are probably my most-used Compose combos.
I like em-dashes and will continue to use them.
Yes, that is more or less what "hot take" means.
Did you mean American style guides prefer the latter?
- Tell you what makes em dashes appealing.
- Help you use em dashes more.
- Give you other grammatical quirks smart people have.
Just tell me.
(If bots RP as humans, it’s only natural we start RP as bots. And yes, I did use a curly quote there.)
* **Veneer of authenticity**: because of the difficulty of typing em-dashes in typical form-submission environments, many human posters tend to forgo them.
* **Social pressure**: even if you take strides to make em-dashes easier to type, including them can have negative repercussions. A large fraction of human audiences have internalized a heuristic that "em-dash == LLM" (which could perhaps be dubbed the "LLM-dash hypothesis"). Using em-dashes may risk false accusations, degradation of community trust, and long-winded meta discussion.
* **Unicode support**: some older forums may struggle with encoding for characters beyond the standard US-ASCII range, leading to [mojibake](https://en.wikipedia.org/wiki/Mojibake).
You can read it yourself if you'd like: https://news.ycombinator.com/item?id=46589386
It was not just the em dashes and the "absolutely right!" It was everything together, including the robotic clarifying question at the end of their comments.
I think this one is a much closer fit: https://news.ycombinator.com/item?id=46661308
According to what I know, the correct way to use an em-dash is to not surround it with spaces, so words look connected like--this. And indeed, when I started to use em-dashes in my blog(s), that's how I did it. But I found it rather ugly, so I started to put spaces around it. And there were periods where I stopped using em-dashes altogether.
I guess what I'm trying to say is that unless you write professionally, most people are inconsistent. Sometimes I use em-dashes. Sometimes I don't. In some cases I capitalize my words where needed, and sometimes not, depending on how much of a hurry I'm in, or whether I'm typing from a phone (which does a lot of heavy lifting for me).
If you see someone who consistently uses the "proper" grammar in every single post on the internet, it might be a sign that they use AI.
Likewise. I used to copy/paste them when I couldn't figure out how to actually type them, lol. Or use the HTML char code `&mdash;`. It sucks that good grammar now makes people assume you used AI.
LLMs use em-dash because people (in their training data) used em-dash. They use "You're absolutely right" because that's a common human phrase. It's not "You write like an LLM", it's "The LLMs write kind of like you", and for good reason: that's exactly what people have been training them to do.
And yes, "pun" intended for extra effect, that also comes from humans doing it.
[1]: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...
[2]: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...
It's still frequently identifiable in (current-generation) LLM text by the glossy superficiality that comes along with these usages. For example, in "It's not just X, it's Y", when a human does this it will be because Y materially adds something that's not captured by X, but in LLM output X and Y tend to be very close in meaning, maybe different in intensity, such that saying them both really adds nothing. Or when I use "You're absolutely right" I'll clarify what they are right about, whereas for the LLM it's just an empty affirmation.
For the past 15 years, I’ve used the Unicycle Vim plugin¹ which makes it very easy to add proper typographic quotes and dashes in Insert mode. As something of a typography nerd, I’ve extended it to include other Unicode characters, e.g., prime and double-prime characters to represent minutes and seconds.
At the same time, I’ve always used a Firefox extension that launches GVim when editing a text box; currently, I’m using Tridactyl for this purpose.
hyphens are so hideous that I can't stand them.
I'm sure it's happening, but I don't know how much.
Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.
And some people are probably running bots on HN just for amusement, with no application in mind.
And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.
I've gotten a few replies that made me wonder whether it was an LLM.
Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.
(P.S., The more you upvote me, the sooner you get to stop hearing from me.)
32,767 can be the hard max, to permit rare occasional comments after that.
Even this submission is out of date as images no longer have the mangled hand issues.
We are actually blessed right now in that it's easy to spot AI posts. In 6 months or so, things will be much harder. We are cooked.
YouTube and others pay for clicks/views, so obviously you can maximize this by producing lots of mediocre content.
LinkedIn is a place to sell, either a service/product to companies or yourself to a future employer. Again, the incentive is to produce more content for less effort.
Even HN has the incentive of promoting people's startups.
Is it possible to create a social network (or "discussion community", if you prefer) that doesn't have any incentive except human-to-human interaction? I don't mean a place where AI is banned, I mean a place where AI is useless, so people don't bother.
The closest thing would probably be private friend groups, but that's probably already well-served by text messaging and in-person gatherings. Are there any other possibilities?
Blogs can have ads, but blogs with RSS feeds are a safer bet as it's hard to monetize an RSS feed. Blogs are a great place to find people who are writing just because they want to write. As I see more AI slop on social media, I spend more time in my feed reader.
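And following a feed reader takes almost no machinery. A minimal sketch using the open-source feedparser library (the feed URL is a placeholder, not a real blog):

```python
# Minimal sketch of reading a blog's RSS/Atom feed with the feedparser
# library (pip install feedparser). The URL below is a placeholder.
import feedparser

feed = feedparser.parse("https://example.com/feed.xml")
print(feed.feed.get("title", "untitled feed"))

for entry in feed.entries[:10]:
    # Each entry carries the post title and a link straight to the blog,
    # with no engagement algorithm sitting between you and the writing.
    print(f"- {entry.title}: {entry.link}")
```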
Kagi's small web lens seems to have a similar goal but doesn't really get there. It still includes results that have advertising, and omits stuff that isn't small but is ad free, like Wikipedia or HN.
I think it's worth exploring.
BTW, HN has ads. They post job openings at their startups; these appear like posts but you can't vote on them. Launch HN is another way this site is monetized, although that seems quite reasonable and you can vote on/flag those posts like any other.
spot on. The number of times I've come across a poorly made video where half the comments are calling out its inaccuracies... In the end YouTube (or any other platform) and the creator get paid. Any kind of negative interaction with the video either counts as engagement or just means moving on to the next whack-a-mole variant.
None of these big tech platforms that involve UGC were ever meant to scale. They are beyond accountability.
Makes it harder to see if you're watching an ad
Completely coincidental
Elon Musk cops a lot of the blame for the degradation of Twitter among people who care about that sort of thing, and he definitely plays a part there, but it's the monetisation aspect that was the real tilt toward all noise, from a signal-to-noise perspective.
We've taken a version of a problem from the physical world into the digital world. It runs along the same lines as how high rents (commercial or residential) limit the diversity of people or commercial offerings in a place, simply because only a certain kind of thing can work or be economically viable. People always want different mixes of things and offerings, but if the structure (in this case rent) only permits one type of thing, then that's all you're going to get.
The biggest problem I see is that the Internet has become a brainwashing machine, and even if you have someone running the platform with the integrity of a saint, if the platform can influence public opinion, it's probably impossible to tell how many real users there actually are.
Any community that ends up creating utility to its users, will attract automation, as someone tries to extract, or even destroy that utility.
A potential option could be figuring out community rules that ensure all content, including bot-generated content, provides utility to users. Something like the rules on Change My View, or r/AITA. There are also tests being run to see if LLMs can identify or provide bridges across flamewars.
It's only recently, when I was considering reviving old-school forum interaction, that I realized that while I got the platforms for free, there were people behind them who paid for the hosting and the storage, and who were responsible for moderating the content so that every discussion didn't derail into a low-level accusation and name-calling contest.
I can't imagine the amount of time, and tools, it takes to keep discussion forums free of trolls, more so nowadays, with LLMs.
This is specifically in the context of a niche hobby website where the rules are simple and identifying rule-breaking content is easy. I'm not sure it would work on something with universal scope like Reddit or Facebook, but I'd rather we see more focused communities anyway.
But all the while they were doing legitimate reporting, when they came across their real cheating account they'd report it as not cheating. And supposedly this person got away with it for years by having a good, reputable community-reporting record with high alignment scores.
I know one exception doesn't mean it's not worth it. But we must acknowledge the potential for abuse. I'd still rather have one occasionally ambitious abuser than countless low-effort ones.
1. prohibit all sorts of advertising, explicit and implicit, and actually ban users for it. The reason most people try to get big on SM is so they can land sponsorships outside of the app. But we'd still have the problem of telling whether something is sponsored or not.
2. no global feed, show users what their friends/followers are doing only. You can still have discovery through groups, directories, etc. But it would definitely be worse UX than what we currently have.
Yes, but its size must be limited by Dunbar's number[0]. This is the maximum size of a group of people where everyone can know everyone else on a personal basis. Beyond this, it becomes impossible to organically enforce social norms, and so abstractions like moderators and administrators and codes of conduct become necessary, and still fail to keep everyone on the same page.
To take a different cognitive domain, think about color. Wiktionary gives around 300 color terms for English[1]. I doubt many English speakers would be able to use all of them with relevant accuracy. And obviously even RGB encoding allows one to express far more nuances. And obviously most people can fathom far more nuances than they could verbalize.
Yes, it is possible. Like anything worthwhile, it is not easy. I have been a member of a small forum of around 20-25 active users for 20 years. We talk about all kinds of stuff; it was initially just IT-related, but we also touch on motorcycles (at least 5 of us ride or used to ride; I used to go riding with a couple of them) and some social topics, and we tend to avoid politics (too divisive) and religion (I think none of us is religious enough to debate it). We were initially in the same country and some were meeting IRL from time to time, but now we are spread across Europe (one in the US), so the forum is what keeps us in contact. Even the ones in the same country, probably a minority these days, are spread too thin, but the forum is there.
If human interaction requires IRL, I have met fewer than 10 forum members and met frequently with just 3 (2 on motorcycle trips, one worked for a few years in the same place as I did), but that is not a metric that means much. It is the false sense of being close over the internet while being geographically far, which works in a way but not really. For example, my best friends all emigrated; most were childhood friends, and talking to them on the phone or the Internet means I never feel lonely, but only seeing them every few years grows the distance between us. That is impacting human-to-human interaction; there is no way around it.
the idea being that you'd somewhat ensure the person is a human that _may well_ know what they're talking about e.g. `abluecloud from @meta.com`.
Kill the influencer, kill the creator. It's all bullshit.
> What if people DO USE em-dashes in real life?
They do and have, for a long time. I know someone who for many years (much longer than LLMs have been available) has complained about their overuse.
> hence, you often see -- in HackerNews comments, where the author is probably used to Markdown renderer
Using two dashes for an em-dash goes back to typewriter keyboards, which had only what we now call printable ASCII and where it was much harder to add non-ASCII characters than it is on your computer - no special key combos. (Which also means that em-dashes existed in the typewriter era.)
I do and so do a number of others, and I like Oxford commas too.
Now any photo can be faked, so the only photos to take are ones that you want yourself for memories.
dude, hate to break it to you, but the fact that it's your "one and only" makes it more convincing that it's your social network. if you used facebook, instagram, and tiktok for socializing, but HN for information, you would have another leg to stand on.
yes, HN is "the land of misfit toys", but if you come here regularly and participate in discussions with other people on a variety of topics and you care about the interactions, that's socializing. The only reason you think it's not is that you find actual social interaction awkward, so you assume that if you like this it must not be social.
If no human ever used that phrase, I wonder where the AIs learned it from? Have they invented new mannerisms? That would imply they're far more capable than I thought they were.
Reinforced with RLHF? People like it when they're told they're right.
How sick and tired I am of this take. Okay, people are just bags of bones plus slightly electrified boxes with fat and liquid.
There's a new one: "wired", as in "I have wired this into X" or "this wires into Y". Cortex does this and I have noticed it more and more recently.
It super sticks out, because who the hell ever said that part X of the program wires into Y?
It may grate, but to me it grates less than "correct", which is a major sign of arrogant "I decide what is right or wrong"; when I hear it outside of a context where somebody is the arbiter or teacher, I switch off.
But you're absolutely wrong about "you're absolutely right".
It's a bit hokey, but it's not a machine made signifier.
I feel things are just as likely to get to the point where real people are commonly declared AI, as they are to actually encounter the dead internet.
Maybe it is a UK thing?
https://en.wikipedia.org/wiki/The_Unbelievable_Truth_(radio_...
I love that BBC radio (today: BBC Audio) series. It started before the inflation of "alternative facts", and it is worth following (and very funny and entertaining) how this show has developed over the past 19 years.
1. People who live in poorer countries who simply know how to rage bait and are trying to earn an income. In many such countries $200 in ad revenue from Twitter, for example, is significant; and
2. Organized bot farms who are pushing a given message or scam. These too tend to be operated out of poorer countries because it's cheaper.
Last month, Twitter kind of exposed this accidentally with an interesting feature that showed account location with no warning whatsoever. Interestingly, showing the country in the profile got disabled for government accounts after it raised some serious questions [1].
So I started thinking about the technical feasibility of showing location (country, or state for large countries) on all public social media accounts. The obvious defense is to use a VPN in the country you want to appear to be from, but I think that's a solvable problem.
Another thing I read about was Nvidia's efforts to combat "smuggling" of GPUs into China with location verification [2]. The idea is fairly simple: you send a challenge and measure the latency. VPNs can't hide latency.
So every now and again the Twitter or IG or TikTok server would answer an API request with a challenge, which couldn't be anticipated and would also be secure, being part of the HTTPS traffic. The client would respond to the challenge, and if the latency was 100-150ms consistently despite the account showing a location of Virginia, then you could deem it inauthentic and basically just downrank all its content.
There's more to it of course. A lot is in the details. Like you'd have to handle verified accounts and people traveling and high-latency networks (eg Starlink).
You might say "well the phone farms will move to the US". That might be true but it makes it more expensive and easier to police.
It feels like a solvable problem.
[1]: https://www.nbcnews.com/news/us-news/x-new-location-transpar...
[2]: https://aihola.com/article/nvidia-gpu-location-verification-...
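To make that concrete, here's a rough sketch of the heuristic as I imagine it; the function names, constants, and thresholds are all my own illustrative assumptions, not anything Twitter or Nvidia actually exposes:

```python
# Rough sketch of latency-based location plausibility checking. All names
# and thresholds are illustrative assumptions, not any platform's real API.
import os
import hashlib

# Light in fiber covers roughly 200 km per millisecond; real networks add
# routing and processing overhead on top of that physical floor.
KM_PER_MS_IN_FIBER = 200.0
OVERHEAD_FACTOR = 3.0  # generous allowance for normal network slop

def make_challenge() -> bytes:
    """A random nonce the client cannot precompute or cache."""
    return os.urandom(32)

def expected_response(challenge: bytes) -> bytes:
    """What the client must echo back; keeps the reply unforgeable."""
    return hashlib.sha256(challenge).digest()

def latency_is_plausible(measured_rtt_ms: float, claimed_distance_km: float) -> bool:
    # Minimum physically possible round trip for the claimed distance
    # (a VPN exit near the server can't make light travel faster)...
    floor_ms = 2 * claimed_distance_km / KM_PER_MS_IN_FIBER
    # ...and a generous ceiling. An account claiming Virginia, a few hundred
    # km from the server, but consistently answering in 100-150 ms over many
    # samples, fails this check, exactly as described above.
    ceiling_ms = max(floor_ms * OVERHEAD_FACTOR, 40.0)
    return floor_ms <= measured_rtt_ms <= ceiling_ms

# In reality the client echoes expected_response(challenge) over the network
# and the server times the round trip; here we just plug in a measured value.
challenge = make_challenge()
assert expected_response(challenge) == hashlib.sha256(challenge).digest()
print(latency_is_plausible(measured_rtt_ms=120.0, claimed_distance_km=300.0))  # -> False
```

One sample proves little (a slow hotel Wi-Fi would also fail); the signal would have to come from consistent measurements over time, as the comment above says.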
I am sick of the em-dash slander as a prolific en- and em-dash user :(
Sure for the general population most people probably don't know, but this article is specifically about Hacker News and I would trust most of you all to be able to remember one of:
- Compose, hyphen, hyphen, hyphen
- Option + Shift + hyphen
(Windows Alt code not mentioned because WinCompose <https://github.com/ell1010/wincompose>)
- OpenAI uses the C2PA standard [0] to add provenance metadata to images, which you can check [1]
- Gemini uses SynthId [2] and adds a watermark to the image. The watermark can be removed, but SynthId cannot as it is part of the image. SynthId is used to watermark text as well, and code is open-source [3]
[0] https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-...
[1] https://verify.contentauthenticity.org/
I know the metadata is probably easy to strip, maybe even accidentally, but their own promotional content not having it doesn't inspire confidence.
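To illustrate how fragile metadata-based provenance is, here's a crude sketch of my own (a heuristic, not any official C2PA API) that merely detects whether a manifest seems to be present at all; C2PA manifests are stored in JUMBF boxes labelled "c2pa", so a byte scan can hint at their presence:

```python
# Crude heuristic sketch: scan an image file for the JUMBF box type and the
# "c2pa" manifest label. This proves nothing about the signature chain;
# real verification needs proper C2PA tooling like the verify site above.
from pathlib import Path

def maybe_has_c2pa_manifest(path: str) -> bool:
    data = Path(path).read_bytes()
    # Re-encoding, screenshotting, or metadata stripping removes the
    # manifest entirely, which is exactly the weakness noted above.
    return b"jumb" in data and b"c2pa" in data

print(maybe_has_c2pa_manifest("downloaded_image.jpg"))  # placeholder path
```

Strip the metadata and the check silently turns negative, which is why the watermark-in-pixels approach is harder to defeat.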
AI content outnumbers real content. We are not going to adjudicate whether every single thing is real or not. C2PA is about labeling the gold in a way the dirt can't fake. A photo with it can be considered real and used in an encyclopedia or submitted in court without people doubting it.
That's not quite right. SynthID is a digital watermark, so it's hard to remove, while metadata can be easily removed.
Not only is it impossible to adjudicate or police, I feel like this will absolutely have a chilling effect on people wanting to share their projects. After all, who wants to deal with an internet mob demanding that you disprove a negative? That's not what anyone who works hard on a project imagines when they select Public on GitHub.
People are no more required to disclose their use of LLMs than they are to release their code... and if you like living in a world where people share their code, you should probably stop demanding that they submit to your arbitrary purity tests.
It honestly felt like being gaslighted. You see one thing, but they keep claiming you are wrong.
I'd feel the same way you did, for sure.
You are absolutely right! ;)
This is just like those projects where guys from high school took Ubuntu, changed the logo in a couple of places, and then claimed they'd made a new OS.
Anybody minimally competent can see the childish exaggeration in both cases.
The most logical request is to grow up, be transparent about what you did, and stop lying.
Just absolutely loved it. Everyone was wondering how deepfakes were going to fool people, but on HN you just have to lie somewhere on the Internet and the great minds of this site will believe it.
If AI cost you what it actually costs, you would use it more carefully and for better purposes.
Non-native speaker here: huh, is "you are absolutely right" wrong somehow? I.e., are you a bad English speaker for using it? I fully agree (I guess "fully agree" is the common one?) with this criticism of the article; to me that colloquialism does not sound fishy at all.
There might also be two effects at play:
1. Speech "bubbles" where your preferred language is heavily influenced by where you grew up. What sounds common to you might sound uncommon in Canada.
2. People have been using LLMs for years at this point, so what is common for them might be influenced by what they read in LLM output. So while it started as an LLM colloquialism, it could have been popularized by LLM usage.

It makes sense in English, however:
a) "you are" vs "you're". "you are" sounds too formal/authoritative in informal speech, and depending on tone, patronising.
b) one could say "you're absolutely right", but the "absolutely" is too dramatic/stressed for simple corrections (an example of sycophancy in LLMs)
If the prompt was something like "You did not include $VAR in func()", then a response like "You're right! Let me fix that.." would be more natural.
Interestingly, "absolutely right" is very common in German: "du hast natürlich absolut Recht" ("you are of course absolutely right") is something I can easily imagine in a friend's voice (or my own) at a dinner table. It's "du hast Recht" that sounds a little too formal and strong.
Agreed on the sycophancy point, in Gemini I even have a preamble that basically says "don't be a sycophant". It still doesn't always work.
Using this kind of strategy eventually leads to the LLM recurrently advertising what it just produced as «straight to the point, no fluff, no bullshit». («Here is the blunt truth»).
Of course, no matter how the LLM advertises its production, it is too often not devoid of sycophancy.
With how blame-avoidant western individualist culture can be, seeing something "admit" doing wrong so quickly, and so emphatically, could be uncanny valley-level jarring.
> You're absolutely right!
And you're good
But it was a long death struggle, bleeding out drop by drop. Who remembers that people had to learn netiquette before getting into conversations? That is called civilisation.
The author of this post experienced the last remains of that culture in the 00s.
I don't blame the horde of uneducated home users who came after the Eternal September. They were not stupid. We could have built a new culture together with them.
I blame the power of the profit. Big companies rolled in like bulldozers. Mindless machines, fueled by billions of dollars, rolling in the direction of the next ad revenue.
Relationships, civilization and culture are fragile. We must take good care of them. We should have, but the bulldozers destroyed every structure they lived in on the Internet.
I don't want to whine. There is a lesson here: money, and especially advertising, is poison for social and cultural spaces. When we build the next space where culture can grow, let's make sure to keep the poison out by design.
I call this the "carpet effect", whereby all carpets in Morocco have a deliberate imperfection, lest they impersonate God.
I wonder if this does apply to the same magnitude in the real world. It's very easy to see this phenomenon on the internet because it's so vast and interconnected. Attention is very limited and there is so much stuff out there that the average user can only offer minimal attention and effort (the usual 80-20 Pareto allocation). In the real world things are more granular, hyperlocal and less homogeneous.
> The Oxford Word of the Year 2025 is rage bait
> Rage bait is defined as “online content deliberately designed to elicit anger or outrage by being frustrating, provocative, or offensive, typically posted in order to increase traffic to or engagement with a particular web page or social media content”.
https://corp.oup.com/news/the-oxford-word-of-the-year-2025-i...
I don't mind people using AI to create open source projects, I use it extensively, but I have a rule that I am responsible and accountable for the code.
Social media have become hellscapes of AI slop, full of "influencers" trying to make quick money by overhyping slop to sell courses.
Maybe where you are from the em dash is not used, but in Queen's English speaking countries the em dash is quite common to represent a break of thought from the main idea of a sentence.
That someone is vibe-coding their application, running the symptoms of their illness through ChatGPT, or using it to write a high school essay isn't really the problem. It's also not a massive issue that you can generate random videos. The problem is that social media not only allows propaganda to spread at the speed of light without any filter or verification, but actually encourages it in the name of profit.
Remove the engagement-retaining algorithms from social media; stop measuring success in terms of engagement. Do that, and AI will become much less of a problem. LLMs are a tool for generation; social media is the transport mechanism that makes the generated content actually dangerous.
It used to be the Internet, back when the name was still written with a capital first letter. The barrier to using the Internet was high enough that mostly only genuinely curious and thoughtful people a) got past it and b) had the persistence to find interesting stuff to read and write about on it.
I remember when TV and magazines were full of slop of the day at the time. Human-generated, empty, meaningless, "entertainment" slop. The internet was a thousand times more interesting. I thought why would anyone watch a crappy movie or show on TV or cable, created by mediocre people for mere commercial purposes, when you could connect to a lone soul on the other side of the globe and have intelligent conversations with this person, or people, or read pages/articles/news they had published and participate in this digital society. It was ethereal and wonderful, something unlike anything else before.
Then the masses got online. Gradually, the interesting stuff got washed into the cracks of the commercial internet, still existing but mostly overshadowed by everything else. Commercial agendas, advertisements, entertainment, company PR campaigns disguised as articles: all the slop you could get without even touching AI. With subcultures moving from Usenet to web forums, or from writing web articles to posting on Facebook, the barrier got lowered until there was no barrier, and all the good stuff got mixed with the demands and supplies of everything average. Earlier, there were always a handful of people in the digital avenues of communication who didn't belong, but they could be managed; nowadays the digital avenues of communication are open to everyone, and consequently you get every kind of people in, without any barriers.
And where there are masses, there are huge incentives to profit from them. This is why the internet is no longer infrastructure for the information superhighway but for distributing entertainment and profiting from it. First, transferring data got automated and became dirt cheap; now creating content is being automated and becomes dirt cheap. The new slop oozes out of AI. The common denominator of the internet is so low that smart people get lost in all the easily accessed action. Further, smart people themselves are now succumbing to it, because to shield yourself from all the crap that is the commercial slop internet you basically have to revert to being a semi-offline hermit, and that goes against all the curiosity and stimuli deeply associated with smart people.
What could be the next differentiator? It used to be knowledge and skill: you had to be a smart person to know enough and learn enough to get access. But now all that gets automated so fast that it proves to be no barrier.
Attention span might be a good metric to filter people into a new service, realm, or society. Even though, admittedly, it is shortening for everyone, smart people would still win.
Earlier solutions such as Usenet and IRC haven't died, but they're only used by the old-timers. It's a shame, because such a gathering would miss all the smart people raised in the current social media culture: the world changes, and what worked in the 90s is no longer relevant except for people who were there in the 90s.
Reverting to in-real-life societies could work but doesn't scale world-wide and the world is global now. Maybe some kind of "nerdbook": an open, p2p, non-commercial, not centrally controlled, feedless facebook clone could implement a digital club of smart people.
The best part of setting up a service for smart people is that it does not need to prioritize scaling.
This particular lament is nothing new, and is also known as Eternal September, first described in 1994.
1. There are channels specialized in topics like police bodycam and dashcam videos, or courtroom videos. AI there is used to generate the voice (and sometimes a very obviously fake talking head) and maybe the script itself. It seems to be a way to automate tasks.
2. Some channels are generating infuriating videos about fake motorbikes releases. Many.
Reminds me of those times in Germany when mainstream media and people with decades in academia used the term "Putin Versteher" (person who gets Putin, a Putin "understander") ad nauseam ... it was hilarious.
Unrelated to that, sometime last year, I searched "in" ChatGPT for occult stuff in the middle of a sleepless night and it returned a story about "The Discordians", some dudes who ganged up in a bowling hall in the 70's and took over media and politics, starting in the US and growing globally.
Musk's "Daddy N** Heil Hitler" greeting, JD's and A. Heard's public court hearings, the Kushners being heavily involved with the recruitment department of Epstein's islands and their "little Euphoria" clubs, as well as Epstein's "Gugu Gaga Cupid" list of friends and friends of friends; it's all somewhat connected to "The Discordians", apparently.
It was a fun "hallucination" in between short bits on Voodoo, Lovecraft and stuff one rarely hears about at all.
Recently someone accused me of being a clanker on Hacker News (firstly lmao, but secondly wow) because of my "username" (not sure how that's relevant; when I created this account I felt a moral obligation to learn and ask for help to improve, and improve I did, whether in writing skills or in learning about tech).
Then I posted another comment on another thread here which was talking about something similar. The earlier comment got flagged, along with my response to it, but this one stayed. Then someone else saw that comment and accused me of being AI again.
This pissed me off, because I got called AI twice in 24 hours. That made me want to quit Hacker News. You can see from my comments that I write long ones (partially because they act as my mini blog, and I just like being authentically me; this is me just writing my thoughts with a keyboard :)
To say that what I write is AI feels like such high disrespect, because I have spent hours thinking about some of the comments I've made here, and I don't really care about the upvotes. It's just that this place is mine and these thoughts are mine. You can know me and verify I am real just by reading through the comments.
And then getting called AI... oof. Anyway, I created a "Tell HN: I got called clanker twice" post where I wrote the previous thing, which got flagged, and I am literally not kidding: the first comment, maybe 2 minutes afterwards, came from an AI-generated bot itself (a completely new account) which literally just said "cool".
Going to their profile, they were promoting some AI shit like fkimage or something (intentionally not naming the real website because I don't want those bots to get any rage-bait attention or conversions on their websites).
So you can see the whole irony of the situation here.
I immediately built myself a Bluesky thread creator where I can write a long message and it will automatically split it into a thread or something (ironically built via Claude, because I don't know how browser extensions are made), just so that I can now write things on Bluesky too.
Funny thing is, I used to defend Hacker News and glorify it a few days ago when a much more experienced guy compared HN to 4chan.
I am a teenager. I don't know why I like saying this, but the point is that most teenagers aren't like me (that I know of). It has both its ups and downs (I should be studying chemistry right now), but Hacker News culture was something that inspired me to be the guy who feels confident tinkering and making computers "do" what he wants (mostly for personal use/prototyping, so I do use some parts of AI; you can read one of my other comments on why I believe, even as an AI hater, that prototyping and personal use might make sense with AI for the most part; my opinion's nuanced).
I came to Hacker News because I wanted to escape dead internet theory in the first place. I saw people doing some crazy things in here; reading comments this long while commuting from school was a vibe.
I am probably gonna migrate to Lemmy/Bluesky/the federated land. My issue with them is the skewed ratio of political messages to tech content (and I love me some geopolitics, but quite frankly I am tired and I just want to relax).
But the lure of Hacker News is way too strong, which is why you still see me here :)
I don't really know what the community can do about bots.
Another part is that there is this model on LocalLLaMA which I discovered the other day which works in the opposite direction (it can convert LLM-looking text to human-sounding text, and actually bypasses some bot checks, and also drops the --, I think).
Grok (I hate Grok) produces some insanely real-looking text. It still has the --, but I do feel like if one removes it and modifies the output just a bit (whether using LocalLLaMA models or others), you've got yourself a genuine propaganda machine.
I was part of a Discord AI server and I was shocked to hear that people had built their own LLM finetunes and were running them; they actually experimented on 2-3 people and none were able to detect it.
I genuinely don't know how to prevent bots in here, or how to prevent false positives.
I lost my mind 3 days ago when this happened. I had to calm myself, and I am trying to use Hacker News less frequently. I just don't know what to say, but I hope y'all realize how it put a bad taste in my mouth and why I feel a little unengaged now.
Honestly, I feel like turning my previous Hacker News comments into blog posts on my own website. They might deserve a better place too.
Oops, wrote way too long of a message, so sorry about that, man, but I just went with the flow. And thanks for writing this comment so that I can finally have one comment trying to explain how I was feeling.
If you define social networks as a graph of connections, fair enough - there's no graph. It is social media though.
HN is social in the sense that it relies on (mostly) humans considering what other humans would find interesting and posting/commenting for the social value (and karma) that generates. Text and links are obviously media.
There seems to be an insinuation that HN isn't in the same category as other aggregators and algorithmic feeds. It's not always easy to detect, but the bots are definitely among us. HN isn't immune to slop; it's just fairly good at filtering the obvious stuff.
Show HN: Minikv – Distributed key-value and object store in Rust (Raft, S3 API) | https://news.ycombinator.com/item?id=46661308
Commenter: > What % of the code is written by you and what % is written by ai
OP: > Good question!
>
> All the code, architecture, logic, and design in minikv were written by me, 100% by hand. I did use AI tools only for a small part of the documentation—specifically the README, LEARNING.md, and RAM_COMMUNITY.md files—to help structure the content and improve clarity.
>
> But for all the source code (Rust), tests, and implementation, I wrote everything myself, reviewing and designing every part.
>
> Let me know if you want details or want to look at a specific part of the code!
Oof. That is pretty damning.
———
It’s unfortunate that em-dashes have become a shibboleth for AI-generated text. I love em-dashes, and iPhones automatically turn a double dash ( -- ) into an em dash.
I saw, 17 years ago, a schoolboy make "his own OS", which was simply Ubuntu with replaced logos. He got on TV with it; IIRC he was promoting it on the internets (forums back then), and kept insisting that this was his own work. He was bullied in response and within a few weeks disappeared from the nets.
What does it have to do with me personally, if I'm neither the author nor a bully? Today I learned that I can't trust new libraries posted in official repos. They can be just wrapper-code slop. In 2012, Jack Diederich, in his talk "Stop Writing Classes", said that he'd read every library's source code to see if there was anything stinky. I used to think that reading into everything you use was a luxury of his time and qualifications. Now it has become a necessity, at least for new projects.
I don’t think LLMs and video/image models are a negative at all. And it’s shocking to me that more people don’t share this viewpoint.
The 'Dead Internet' (specifically AI-generated SEO slop) has effectively broken traditional keyword search (BM25/TF-IDF). Bad actors can now generate thousands of product descriptions that mathematically match a user's query perfectly but are semantically garbage/fake.
We had to pivot our entire discovery stack to Semantic Search (Vector Embeddings) sooner than planned. Not just for better recommendations, but as an adversarial filter.
When you match based on intent vectors rather than token overlap, the "synthetic noise" gets filtered out naturally, because the machine matches on context rather than on the string alone. Semantic search is becoming the only firewall against the dead internet.
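A minimal sketch of such a filter, using the open-source sentence-transformers library (the model name, query, candidates, and threshold are all illustrative; whether keyword-stuffed text actually scores lower depends on the model and data):

```python
# Minimal sketch of an embedding-based adversarial filter.
# pip install sentence-transformers; model and threshold are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "waterproof hiking boots for cold weather"
candidates = [
    "Insulated leather hiking boot with a waterproof membrane, rated to -20C",
    "best waterproof hiking boots cold weather boots hiking waterproof deal",  # keyword-stuffed
]

query_vec = model.encode(query, convert_to_tensor=True)
cand_vecs = model.encode(candidates, convert_to_tensor=True)
scores = util.cos_sim(query_vec, cand_vecs)[0]

# Token overlap treats both candidates as perfect matches; in embedding
# space, incoherent keyword-stuffed text tends to drift from the query's
# intent, so a similarity floor can prune it.
THRESHOLD = 0.5
for text, score in zip(candidates, scores):
    verdict = "keep" if float(score) > THRESHOLD else "drop"
    print(f"{float(score):.2f} {verdict} {text}")
```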
Often I do want exact matches, and Google refuses to show them no matter what special characters you use to try to modify the search behaviour.
Personally I'd rather search engines continue to return exact matches and just de-rank content that has poor reputation, and if I want to have a more free-form experience I'll use LLMs instead.
"We have been waiting 20 minutes!"
"You're absolutely right, and I apologize. We will try to schedule better in the future."
I have never said it to someone with whom I was having a regular discussion.
OTOH, I used to overuse em dashes because the Mac made proper typesetting possible. It used to be the sign that someone had read the very useful The Mac is not a Typewriter by Robin Williams.
and it says "You're absolutely right, and I apologize. I have fixed the issue now"
They were trained on you, mate.
I'm thinking stuff like web rings.
Or if you have a blog, maybe also have a curated set of pages you think are good, sort of your bookmarks, that other people can have a look at.
People are still on the internet and making cool stuff, it's just harder to find them nowadays.
Something similar happened in the Podcast and YouTube spheres, where every creator seems to be "sponsored" by these shady companies that allocate 70% of their revenue for creator payouts, for the sake of affiliate marketing.
I really don't know what the solution is, though.
But really I'm not a professional in this field. I'm sure there are pitfalls in my imagined solution. I just want some traceability from the images used in news articles.
But nope. Instead we have meme coins and speculators...
This was on HN this year, and it was, in classic HN fashion, dismissed as a solution in search of a problem. Well, perhaps people in this thread will think differently.
would someone benefit from demonstrating a photo is real?
The top use case I can think of is to ensure AI is trained on real photos. Any upside for humans?
Paying creators is the dumbest and most consequential aspect of modern media. There is no reason to reward creators, zero. They should actually be paying Youtube for access to their audience. They actually would pay to be seen, paying them is both stupid and unnecessary. Kill the incentives and you kill the cancer.
There may be some irony to be found in this human centipede.
Those sound funny; why would they make you sad?
[1] Those "crappy websites" with a maze of iframes are actually considered surprisingly refreshing today.
everybody is either trying to promote some product, or promote themselves as a "content creator" so they can start getting influencer marketing deals and payouts from youtube or instagram ad splits.
the internet was at its best when most of the content on it was there just because somebody had some information and wanted to share it. that intent is still out there today, but unfortunately it's harder and harder to find, buried amongst all the revenue generation.
Yeah, I especially hate how paranoid everyone is (but rightly so). I am constantly suspicious of others' perfectly original work being AI, and others are constantly suspicious of my work being AI.
The Internet has never been dead. Or alive. Ever since it escaped its comfortable cage in the university / military / small-clique-of-corporations ecosystem and became a thing "anyone" can see and publish on, there has forever been a push-pull between "People wanting to use this to solve their problems" and "People wanting eyeballs on their content, no matter the reason." We're just in an interesting local minimum where the ability to auto-generate human-shaped content has momentarily overtaken the tools search engines (and people with their own brains) use to filter useful from useless, and nobody has yet come up with the PageRank-equivalent nuclear weapon to swing the equation back again.
I'm giving it time, and until it happens I'm using a smaller list of curated sites I mostly trust to get me answers or engage with people I know IRL (as well as Mastodon, which mostly escapes these effects by being bad at transiting novelty from server to server), because thanks to the domain name ownership model pedigree of site ownership still mostly matters.
This is the modern epistemic crisis. And wait till Elon implants a brain computer interface in you. You won't even fully trust your eye looking through a telescope.
I mean sure, the next step will probably be "your ads have been seen by x real users and here are their names, emails, and mobile numbers" :(
As well as verification, there must be teams at Reddit/LinkedIn/wherever working on ways to identify AI content so it can be de-ranked.
1) to satisfy investors, companies require continual growth in engagement and users
2) the population isn't rocketing upwards on a year-over-year basis
3) the % of the population that is online has saturated
4) there are only so many hours in the day
Inevitably, in order to maintain growth in engagement (comments, posts, likes, etc.), it will have to become automated. Are we there already? Maybe. Regardless, any system which requires continual growth has to automate, and the investor expectations for the internet economy require it, and therefore it has or soon will automate.
Not saying it's not bad, just that it's not surprising.
Unfortunately, less privileged users will have to endure the sea of AI content that still preys on the unauthenticated. It will be like using the web without an ad blocker, but 1000X worse.
There are many compounding factors but I experienced the live internet and what we have today is dead.
So I go on my province's subreddit. Politics-wise, if there were an election today, the incumbent politician would increase their majority and may even be looking at a true majority. Hugely popular.
If you find a political thread, there will be 500 comments all agreeing with each other that the incumbent is evil, 50 comments downvoted and censored because they dare have an opinion that agrees with the incumbent, 100 comments deleted by anonymous mods banning people for what reason? Enjoy your echo chamber.
Anyone who experiences being censored a few times will just stop posting. Then when the election happens they have no idea at all why people would ever vote that way because they have never seen anyone do anything but agree with their opinion.
What an utterly dead subreddit.
About 10 years ago we had a scenario where bots probably were only 2-5% of the conversation and they absolutely dominated all discussion. Having a tiny coordinated minority in a vast sea of uncoordinated people is 100x more manipulative than having a dead internet. If you ever pointed out that we were being botted, everyone would ignore you or pretend you were crazy. It didn’t even matter that the Head of the FBI came out and said we were being manipulated by bots. Everyone laughed at him the same way.
This was definitely not the case on HackerNews.
I agree that anonymization makes people more hostile to others, but I doubt the de-anonymization is the solution. Old school forums and IRC channels were, _mostly_, safe because they were (a) small, (b) local, and (c) usually had offline meetups.
Maybe the future will be dystopian and talking to a bot to achieve a given task will be a skill? When we reach the point that people actually hate bots, maybe that will be a turning point?
I think old school meetups, user groups, etc, will come back again, and then, more private communication channels between these groups (due to geographic distance).