I miss the text-only reading era. This is a blog; it should not need JavaScript enabled to render text to a page. I would rather not be annoyed by flavor-of-the-month duplicate scroll bars, cookie banners, newsletter pop-ups five seconds in, scroll-to-the-top pop-ups, idle overlays, highlight-helper bars that break copy-paste, etc. This blog didn't have all of those, but it had some. I'm sure the metrics look great, because I had to load this page four times: once initially; a second time after disabling JavaScript and realizing it doesn't load anything at all; a third time after re-enabling JavaScript and deleting all the annoying elements; and a fourth time to make sure my cosmetic filter was applied correctly. 4x the interactions! Must be doing something right.
There are some people who believe that writing is an act of creative expression; in other words, that writing is primarily about the act (and, as such, a quite selfish activity). Editing destroys the expressive act and must be avoided.
These people's writing is usually incoherent and they are very proud of it. If you've ever read a bad new-age self-help book you've probably encountered writing like this.
Good writers understand that writing is about communication. The initial act of writing (i.e., word puke) is worthless. What matters most is a piece of writing's ability to communicate clearly.
This writing is usually pleasant, concise, and clear.
"I think that is the beauty of writing, the raw , unedited emotions of the person behind every words either for entertainment or educational purposes, is what makes it special"
- the article, clearly expressing the intent of its own mistakes and contextualizing them in the era of LLM-borne "perfect" text
I appreciate the sentiment, and good for him. However, from an audience perspective, why choose to watch a guy filming himself eating cereal with a shaky phone camera when you could watch The Sopranos? (or the latest MrBeast extravaganza, to avoid being pedantic).
I guess it's OK if you enjoy reading someone expressing himself without communicating anything valuable or producing anything polished. It's kind of like people who enjoy stream-of-consciousness poetry or unhinged personal blog posts. It's fine.
But most of us (I think) read for our own gain, expecting substantial / stimulating text that is ideally well researched and serves a clear purpose.
Something like that needs an editor, effective proofreading, and quite some time of work and rework.
At this point, it is far more distracting to see LLM-isms and get completely thrown out of the reading-understanding process than to see some typos or grammatical errors. I actually feel reassured when I see something like a "they're/their" swap, because I know I am reading the author's thoughts instead of some linear algebra vaguely influenced by the author's thoughts.
Five years ago, I probably would have been annoyed by the same.
While I can get behind the sentiment, I hope bad writing doesn't become the standard anti-AI signal. A simple grammar check would have greatly improved this post.
The relative value of those things is shifting. As the cost of polished LLM drivel falls to zero, some might prefer even the most unedited, off-the-cuff human writing to the slop.
Indeed. I for one enjoyed this piece. Yes, it had errors and lots of odd grammatical choices, but the reading remained affordably challenging and the prose had a newness to it.
I work mostly on the tech side of things but my corporate limitation has always been writing up documentation, communicating/translating to stakeholders, and recalling everything relevant when writing PR descriptions. AI has been a breath of fresh air. I actually communicate more information efficiently than I would have ever put the effort into before. I still maintain my own writing for more casual things like social media (HN included) and low stakes Slack conversations but AI for getting across ideas and then proofreading it is great.
I was asked to write user stories about a complex topic where I’m the SME at work. I spent two hours info dumping everything I knew about the project, everything the AI wouldn’t have any context for, using Cursor to add related projects to the workspace, tagging specific files where we’d implemented similar things with our styles, noted all the quirks of the system and how it works and where to find relevant information. I spent a lot of time on it, and then asked it to reach out using cli to grab relevant information from our infra, and write stories about how we’d accomplish everything I intend to get done. I then spent another few hours reviewing the 45 or so stories that conversation generated. It was similar to how I’d talk to a new contractor I’m onboarding to work on the work.
I have a deep knowledge of the information, have done the process we’re doing on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this, I’d guess the fatigue from the boring parts would have made this take a week or maybe two, just because I was doing the parts I enjoy (knowing things and describing them) and I was able to offload the parts I’m not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, we had a discussion, deleted two or three of them that we determined weren’t necessary, and fixed up one or two where I’d provided insufficient information about some context surrounding coloring of a page.
It burned through a ton of Opus 4.6 tokens, looked through a ton of code (mostly that I’d written, pre-LLM), but has been amazing for helping me move into a lead position where grooming stories and being organized has always been my weakest point.
Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.
It’s kinda useful to me for the following three reasons:
- spelling
- grammar or weird grammar as English is not my native language
- read proofing and finding things that do not make sense in terms of sentence structure
I do not use it for ideas, discussing the writing, or anything else, because that defeats the purpose of writing it myself (creative writing).
I think it's quite good. Of course, I'm not taking 100% of the output, but it takes care of my grammar blind spots (damn you, commas and a/an/the articles!).
Can you please share what gets degraded, and how? Sometimes I don't like a phrase it selects, but that's not common.
Well, for one example, it inhibits your desire to improve against those very blind spots. In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
AI can take a rough draft, clean it up, and shorten it as much as you want. The suggestions very often expose ambiguities in the original text. If you think the LLM got it wrong, it's nearly always the LLM overreading some feature of the original that you failed to catch, which is precisely what you'd want out of your proofreader.
Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
> Well, for one example, it inhibits your desire to improve against those very blind spots.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author of the text and knowing what it should be, it can be difficult to read what you wrote to find those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor. Acting as an editor means identifying misplaced commas, incorrect subject-verb agreement (e.g., counts), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is handing the AI the agency to create the content, rather than using it as a tool that acts as a super-charged Grammarly.
> Even professional authors go to an editor who identifies things that need to be fixed.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
> it takes care of my grammar blindspots (damn you commas and a/an/the articles!)
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what and how gets degraded?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.
In many kinds of writing, perhaps most, communicating your state of mind to the reader is a primary goal. Even a smart LLM fundamentally degrades this, because to whatever degree that it has a mind it isn't shaped like yours or mine. I've had a number of experiences this year where I get to the end of a grammatical, well-structured technical document, only to find that it was completely useless because it recited a bunch of facts and analyses but failed to convey what the author was thinking as they wrote it.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.
Only if you don't understand how to control AI. If you understand how it works and have the skills to ride it like a wild horse, you can make yourself a 10x developer. It's maybe a bit of an insult, but you seriously have to change that mindset. AI is not going to be worse tomorrow. It will get better, and it will dramatically change our lives as developers. Code will no longer be a prominent thing we work on in the near future.
I actually find Gmail a better editor/grammar check than LLMs. It makes isolated simplifications/corrections that, IMO, have minimal style impact and just focus on clarifying the phrasing.
What does it say about me that when I run my writing through one of those "detect if AI" tools I seldom see a value of less than 70% confidence that the writing was AI generated?
It baffles me when I see ostensibly smart people refusing to press shift. Especially programmers. I know you can do it! I've seen you use curly brackets!
I really don't see how this can be possible unless they're accepting abysmal recall? Perhaps I'm missing something fundamental here, but the idea that AI and non-AI assisted text can be separated with "nearly 0 false positives" just says to me that it's really just a filter for the weakest, most obvious AI generated text. Is that valuable?
Simple: the derived variance in your word usage and sequences is outside the mean distribution range that would be labeled as AI-generated, given this specific evaluation algorithm.
It’s not nondeterministic
You can probably do the Shannon entropy calculation yourself if you understand what the evaluation algorithm is.
That said…if the evaluator is non-deterministic, then there’s no value in the estimate anyway
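As a rough illustration (this is not any real detector's algorithm, just the textbook formula the commenter alludes to), word-level Shannon entropy of a text is H = -Σ p·log2(p) over the empirical word frequencies:

```python
from collections import Counter
from math import log2

def word_entropy(text: str) -> float:
    """Shannon entropy, in bits per word, of the empirical word distribution."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    # H = -sum(p * log2(p)) over the relative frequency p of each distinct word
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Four distinct equiprobable words: exactly log2(4) = 2 bits per word
print(word_entropy("a b c d"))
```

The number itself is deterministic for a given text, which is the point being made: whether such a statistic is a *useful* AI signal is a separate question from whether it is reproducible.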
I haven't tried my HN comments; I've only tried things spanning more than a few sentences and that I've put more effort into. I only discovered this when my son put an e-mail I wrote to his teacher that he was CC'd on into the tool on his school iPad.
About you? Not much. But I wouldn't spin up a blog, or even post longer comments here, if I wanted to keep my sanity.
The amount of "that is obvious AI slop" comments I see on my own or other people's genuine, non-AI writing has discouraged me from sharing anything longer than roughly a paragraph for probably the rest of my life.
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this. I had the same concerns: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge or decide without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
Depending on how strongly the "the brain is a muscle" saying applies, there is no way that using LLMs/chatbot systems/AI won't deteriorate your brain immensely.
In I, Robot, Will Smith prefers to drive himself because he doesn't trust AI. But we are moving towards self-driving because it would be safer. Would you trust a calculation more if it was done by hand using log tables? Having vehicles allowed us to create sports like dirt-bike riding and monster-truck racing. Yes, something is lost, but something is also gained. We move up a layer of abstraction.
When I was younger, we didn't have cellphones. I had ~20-30 phone numbers memorized, at least. I also used to remember my credit card number. My brain has not deteriorated now that I have offloaded that to my phone.
Point being: it depends on how you use it. If you offload critical thinking to AI, you will probably (slowly) atrophy your critical-thinking muscles. If you offload some bullshit boilerplate or repetitive tasks, giving you more time overall to do the critical-thinking part, you will be fine.
If your body is in good shape, stopping exercise won't make you deteriorate that quickly. What I wonder is, will people get in good shape in the first place.
What I mean is: as someone with lots of experience, I don't care as much about not learning the basics anymore; someone in their 20s or 30s maybe should.
Not sure what you mean by quickly. Back when I was in racing shape, if I stopped my training plan for as little as two weeks, (probably less actually, but I'm being conservative here) I would have a measurable drop in fitness.
Now, as someone who regularly walks the dog and bikes to work, I've got "less to lose" and probably wouldn't deteriorate as much.
See the recent article suggesting that use of navigation apps may correlate, at the population level, with increased Alzheimer's. Will it happen to you? Maybe, maybe not. Life's a box of chocolates!
Or read magazines and newspapers from reputable publications. My grammar and writing have improved tremendously from reading quality magazine articles, e.g. stuff from The Atlantic or The NY Book Review or whatever.
Both magazines and books are valid forms of information consumption and books are not the only way to improve your writing, reading, and understanding of the world.
I wouldn't count on current stuff in those publications being free from AI. We're seeing it in peer-reviewed paper submissions so why not in literary forums?
If you limit yourself to stuff from maybe five years ago or older, yeah it's going to be human-written and human-edited (ghostwriting still possible).
Every now and then when I'm reading something, the writer will use a turn of phrase, a specific word, a metaphor, etc, that is unusually clever, or allows me to see the concept in some obtuse light. Or even, they are just able to choose the right words to make something sound musical or rhythmic in some pleasant way. It's intellectually delightful to come across these in writing.
I've never been surprised by AI writing. Emotion is the biggest part of communication, and these grey boxes have none.
I am not a native speaker. For anything like HN comments I don't use AI, but I see no harm in using AI to correct grammar and maybe some wording. The ultimate change shouldn't be a copy-paste replacement, though; it should be well thought through by the author.
I feel like asking it to polish or rewrite is going too far. Using it as a grammar/spell checker or thesaurus is fine, though. At least that preserves one's voice.
And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.
This is exactly the same struggle for me. Writing technical content about PostgreSQL while keeping my own voice and not sounding LLM-written is genuinely difficult.
As English is not my first language, I run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. Same with writing "boring" technical explanations versus more approachable content. I'm getting pushback on both.
I’ll take a clumsy sentence written by a non-native speaker any day over LLM generated mush. At least I know you chose those words specifically so it gives me some insight into your state of mind and intended meaning.
Any native English speaker who doesn’t live under a rock is very accustomed to reading and hearing English from non-native speakers and familiar with the common quirks and mistakes. English is quite forgiving as a language, we understand you. When in doubt, simplify it.
it's a couple mutually-conflicting languages in a trenchcoat; forgiveness and flexibility are perhaps its defining properties.
To the broader issue: "polish" (in any language) is only valuable insofar as it makes the ideas clearer, attests to innate qualities of the author and/or the investment of their time, or carries its own aesthetic value. As LLMs make (a certain kind of polish) cheap to produce, the value of the middle category attenuates to nothing.
In some specific work contexts, such as writing pull request descriptions, not sounding like AI is something I've given up trying to optimize. It's simply not worth the effort: I'm a non-native speaker, writing detailed PR descriptions is arduous, and the agent already has full context anyway. Obviously any fluff or inaccuracies are aggressively weeded out, but I don't care anymore about the AI voice.
> any fluff or inaccuracies are aggressively weeded out
This work is paramount. Without clear evidence of human filtering, a long, well-formatted message/PR/doc is likely to reduce my estimate of the value/veracity/relevance of its content.
This. My personal style has always been LLM-like, including the generous use of em-dashes and "not-only-this-but-that" mannerisms. It's increasingly difficult to retain a reputation.
It's not that simple. LLMs were trained on lots of writing, and the "LLM voice" resembles in many ways good English prose, or at least effective public communications voice.
For years, even before LLMs, there have been trends of varied popularity to, for lack of a better word, regress - intentionally omitting capitalization, punctuation, or other important details which convey meaning. I rejected those, and likewise I reject the call to omit the emdash or otherwise alter my own manner of speaking - a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
Well, that isn't what I am suggesting. I'm suggesting people ditch X. Reddit. Probably also ditch HN in the next couple of months. If you can run a headless agent to post somewhere, just don't bother visiting that site; honestly, a great rule of thumb right there.
That should leave you with media sources like nyt and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge where there is some decentralized vetting that you are a human, like verifying by inputting the random hash posted outside the local maker space.
I hope editorial departments everywhere are taking careful notes on the ars technica fiasco. Agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to quickly filter, and eliminate all the spurious "this sounds like vibeslop" accusations.
One of the most common criticisms is the use of the emdash. This is a classic bit of English prose that is not problematic except as a stereotype used to dismiss writing for form rather than for content.
Let's grab a few books off the shelf (literally).
Douglas Adams' The Hitchhiker's Guide to the Galaxy has four emdashes on the very first page:
> It is also the story of a book, a book called THGTTG - not an Earth book, never...
Isaac Asimov's classic The Last Question: three emdashes on the first page (as printed in The Complete Stories, Volume I)
> ...they knew what lay behind the cold, clicking, flashing face -- miles and miles of face -- of that giant computer.
Mark Z. Danielewski, House of Leaves: Three emdashes on page 1
> Much like its subject, The Navidson Record itself is also uneasily contained -- whether by category or lection.
Robert Caro, Master of the Senate: Five emdashes on page one
> Its drab tan damask walls...were unrelieved by even a single touch of color -- no painting, no mural -- or, seemingly, by any other ornament
Other page 1s:
* Murakami - 1Q84: 1
* Murray/Cox - Apollo: 1
* Meadows - Thinking in Systems: 1
* Dostoyevsky - The Brothers Karamazov (Pevear/Volokhonsky translation): 4
* Caro - The Power Broker: 5
* Hofstadter - Godel, Escher, Bach: 3
Honestly, when I started this post I expected to have to dig deeper than page 1. The emdash is an important part of English-language literature and I reject the claim that we should ignore all writing that contains it.
No one is asking that we reject all prose with emdash. Not all emdash-users are LLMs, but many LLMs are profligate emdash-users, so adjust your priors accordingly.
Secondarily, I think there's a part of the discourse missing: the presence of a syntactic emdash in a sentence on the internet is not itself a strong signal of LLM writing, but the presence of an actual emdash glyph (—) should raise some eyebrows, especially in fora that aren't commonly authored in rich-text editors (here, Twitter, ...).
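A minimal sketch of that glyph-level distinction (purely illustrative; the function name is my own). The em dash is the single Unicode code point U+2014, whereas a typed hyphen-minus is U+002D; plain-text input fields rarely auto-insert the former, which is why the glyph itself, rather than the punctuation style, is the weak signal:

```python
# Heuristic only: an em-dash glyph in plain-text input is one weak signal,
# not proof of anything; many editors and OS keyboards can insert it too.
def contains_emdash_glyph(text: str) -> bool:
    """True if the text contains the actual em-dash code point U+2014."""
    return "\u2014" in text

print(contains_emdash_glyph("a syntactic dash - typed by hand"))    # hyphen-minus
print(contains_emdash_glyph("an actual glyph\u2014auto-inserted"))  # em dash
```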
I think AI will accelerate an already existing trend that predates it: the global regression to the mean we're seeing in every creative field, from design to video games, from cars to fashion.
I find this similar to when photography was invented: painters moved away from realism in search of originality and creativity, and they produced modern art, which to many of us just looks silly.
>Although 80 % of the content was my own writing, the fact that it was run in a LLM enginee for grammar and vocabulary cross-check, made it failed the "probable written by AI " metric; and it was rejected.
should be:
>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.
1. 80 % -> 80%
2. in -> through
3. a LLM -> an LLM
4. enginee -> engine
5. cross-check -> cross-checking
6. cross-checking, -> cross-checking (removed the comma)
7. made it failed -> meant that it failed (or "made it fail", depending on whether you want to preserve the past tense or preserve the word "made")
8. probable -> probably
9. by AI " -> by AI"
10. ; and it was -> , and it was (no need for a semicolon when linking with a conjunction like "and", and I would consider another word or phrase such as ", and, as a result, it was rejected" to emphasize the causal relationship between the clauses)
That's ten corrections that are fixing straightforward typos and/or grammar and vocab mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.
Relying on AI for editing seems to have atrophied the author's writing if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work and not even thinking about passing it through AI (especially when you were told not to use any AI!) to edit for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very very common for preposition mistakes like this to show up when writing in another language than your first. I am still hopeless with French prepositions.
Are grammatical errors and typos fashionable now? Reading this post, it seems the antithesis in the LLM era is not to edit at all, but rather to write down a stream of consciousness to make it "personal".
When writing letters of recommendation now, I write in a more human tone to avoid sounding like a bot with a line of explanation at the start. Not an error in the sense you mean, but an error in tone for a letter of recommendation, certainly.
I feel like having to signal that you're a human detracts from the content side of things. Proper spelling and grammar, good style etc. are there to help you convey your ideas more accurately. Resorting to a stream of consciousness style of unrefined writing makes it apparent that you're a human, but the downside is that your text is bad.
Oh no, I have had enough of people with quirky (i.e. cringey) writing on the internet. It started with those who refused to use their shift key and it's quickly devolving into something that makes you shiver when you read it. (Not to mention how easy it is to use a system prompt to make an AI write in whatever style you like.)
An awful lot of stuff in the "handmade" aesthetic is made by machine and factory too, and I suspect something similar will happen to any popular writing aesthetic that attempts to avoid being automated away.
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better; they just make it mine.
I never use an LLM to paraphrase my own voice as a matter of principle, but I’ve still been repeatedly accused of doing so because I happen to always have written structured posts, used “smart quotes,” and done that negative comparison thing (it’s genuinely not just fluff, it’s a genuinely useful way to— ah god damn it). Sigh.
Right. The LLMs' quirks aren't bad in themselves; they're bad when they're in every damn paragraph. They're mostly things that in moderation actually improve writing, and that if you saw them once (without the knowledge that they're things LLMs do) would rightly tend to make you think better of the author. And so, of course, in RLHF training they get rewarded, and unfortunately it's not so easy for an LLM to learn "it's good to do this thing a bit but not too much."
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
It's absolutely shocking how many people think that inverting all the quality metrics that we've traditionally used "because LLMs" will lead to good things. Nothing about this will end well.
I feel ya. I've never been accused of using an LLM, fortunately, but depending on the context I do use “smart quotes” (even in „Dutch” or »German«) and the em-dash obviously… (And that ellipsis fella there. It's just so simple to type with a compose key set up.)
Same here, I've always used em dashes and have been called out on negative comparisons – I didn't even know they were an LLM thing. Should I read more LLM to know what phraseology to avoid, or will doing that nudge me towards sounding more LLM? :-(
I have been writing stuff for a long time; my first internet experience was posting on forums about a Game Boy Advance game. Then on other forums, for a philosophy degree, and professionally as a copywriter and technical writer. I've been meaning to write up a post of my thoughts on writing and AI, but the things I've been thinking recently are:
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
I've been a Grammarly customer for quite some time, and I have tried the AI suggestions, but it always loses something and ends up with a whiny, apologetic tone.
I am sorry, but perhaps some use of AI or a grammar check would help? A lawn that's not overly manicured has its charm, but if it has one too many barren patches or clumps of overgrown grass, it doesn't appeal as much. This essay feels a bit like that.
I'd push back and ask the author whether his writing is really getting worse, or whether his standards have risen, leading to undue stress that throws off the flow state.
This is an interface, not an LLM. Do they say which LLM they use? Many of these are interfaces to one of the big three model providers. Others run through OpenRouter to use one of the better open models, all of which have their own quirks.
Once I think something is AI I just can’t read it anymore. It isn’t out of principle or anything, I just become so distracted by the idea that I can’t focus or derive any benefit or pleasure from continuing.
It’s largely a problem of how these tools are packaged, but while it’s certainly nice to have an LLM check your spelling, or review your grammar or style or usage, you should never allow them to actually edit your document directly.
First of all, they will make substantive changes you didn’t intend. The meaning will get changed, errors will be introduced. Tone will be off, and as the author says, your voice will disappear. There is no single “correct” way to write something. And voice and tone are conveyed with grammatical and usage variation. Don’t give that up to a robotic average.
Secondly, you will never improve, or even maintain, your own writing skills if you don’t actively engage with the suggested changes. You also won’t fully realize half the purpose of writing, which is to understand the topic better yourself. Doing the work of editing your piece will help you understand the subject even better. If you just let the machine “fix” your errors, you’ll become a worse writer and less of an expert over time.
> This post, is written without any tools assistance I just wrote what my brain is instructing to type (might not reread it before posting).
How is the author complaining about the quality of their own writing while admitting to not even bothering reading what they wrote, let alone editing it?
(Also, why would using an LLM-based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?)
Because they're self-aware perfectionists and are actively working to stop it, because they reach for all kinds of tools like grammar checkers and AI, but they're aware that using those will make the post lose "their" voice, or the human element of the post.
And that's, I think, a valid choice; you can choose to use all the tools and make something grammatically and stylistically as close to perfect as possible, but who would want to read something that dry? That's for formal writing, and blog posts are not formal.
Reading what you write for editing does not make a text lose your voice. If anything, it amplifies it: you get to ensure that what you intended to say was said.
Not reading what you write smells more like laziness.
Same thing for spell checks, grammar checks, and even AI usage. If you use things lazily, the result will be lazy as well.
Instead of asking an AI tool to write your thoughts in your place, you can write it yourself and ask it to criticize your text: instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.
But that of course would require more work. Asking ChatGPT to produce a text based on a lazily written, bullet point list of brainfarts is probably easier.
> instruct it to not rewrite anything, only give you an overall picture of text clarity, sentiment, etc.
LLMs can't really do that. They can help you produce a correct sentence where you struggle to create your own, but they don't have the capability to do what you suggest.
LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.
Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.
What? LLMs are very capable of doing sentiment analysis. Hell, it's basically one of the things it actually excels at - understanding tone, nuance, context, etc.
I used it many times for exactly this, with good results. It points out ambiguous constructs, parts that are dissonant from the tone I intend, etc.
I have no idea why you think that LLMs can't do that lol
Sentiment analysis for the purpose of categorizing reddit comments, sure. For the purpose of giving you advice about nuance, overall clarity, and the tone of your own long text, no.
I tried it myself, and it actually did a good job.
There's nothing magical about a long text you write yourself vs a stream of reddit comments in a thread. It's all sentiment analysis on text. It can extract ambiguity, show how ideas are connected in context, categorize and summarize, etc.
You should try it and see it for yourself. Feed it some large text of a single author and ask it to do those things, see if the results are satisfactory.
If you use a grammar checker as a grammar checker, it won't make you lose your voice. It will make you use correct grammar.
> you can choose to use all the tools and make something grammatically and stylistically as close to perfect as possible, but who would want to read something that dry
If it is dry, then it is not stylistically perfect. By definition, dry writing is just imperfect writing. Stylistically perfect writing does not have to be dry, and usually is not.
What happens here is that people say "stylistically perfect" when they mean "followed bad stylistic advice".
I see both sides here. Wanting to preserve your natural voice is valid, but editing and using tools don't necessarily take that away. In fact, they can help make your intended message clearer. It probably comes down to how much control you keep over the final result rather than whether you use tools at all.
What annoys me here is that people say "I use AI as a style checker to make my writing better", or claim that good writing is unfairly judged as being by AI ... and then proceed to describe inferior writing results they achieved with AI. Nothing the author wrote there signals that the way he uses AI made his writing better. His use of AI made his output inferior, and not just in the "losing your own voice" way, but literally in the sense that the final text is less effective writing.
I do not mean this comment as a kick against AI. It is very good for some stuff, and less good for other stuff. What annoys me is someone calling output superior while actually complaining about it being inferior.
Hey, maybe that llm needs to be used differently to achieve actually good writing results.
There is no reliable way to detect AI writing. A detector probably trains on texts known to be AI-written and texts known to be human-written, then classifies new text according to this training.
The problem is that it has a pretty high false positive rate. Maybe it thinks it's AI because there are absolutely no spelling mistakes. Or maybe you're French and you use latin-roots words in English that are considered "too smart" for the average writer.
And the problem is that people run those tools, see "80% chance to be written by AI", and instead of considering that 20% is high enough to consider you don't know, will assume it's definitely written by AI.
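The train-then-classify loop described above can be sketched in a few lines. This is a toy illustration, not any real detector: the tiny corpora, the word-level features, and the function names are all invented here, and anything this crude will cheerfully produce the false positives complained about in this thread.

```python
import math
from collections import Counter

def train(ai_texts, human_texts):
    """Build add-one-smoothed per-word log-odds scores from two corpora."""
    ai = Counter(w for t in ai_texts for w in t.lower().split())
    hu = Counter(w for t in human_texts for w in t.lower().split())
    vocab = set(ai) | set(hu)
    ai_n, hu_n = sum(ai.values()), sum(hu.values())
    return {w: math.log((ai[w] + 1) / (ai_n + len(vocab)))
             - math.log((hu[w] + 1) / (hu_n + len(vocab)))
            for w in vocab}

def ai_score(model, text):
    """Sum per-word log-odds: positive leans 'AI', negative leans 'human'.
    Words never seen in training contribute nothing, which is one
    mechanical source of both false positives and false negatives."""
    return sum(model.get(w, 0.0) for w in text.lower().split())
```

A text full of words the "AI" corpus happens to favor scores as AI regardless of who actually wrote it, which is precisely the failure mode that catches careful spellers and non-native speakers.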
Exactly. Depending on what nutrients I've been consuming, the Indians/intelligence in my head could also be artificial. Perhaps that's why I fail those captcha tests most of the time.
Yes, these people are so unbelievably stupid that they think others more intelligent than them can't tell when they use AI to write their stuff. And then they act so annoyed when they get exposed... It's unbearable.
The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.
> Also, why would using an LLM-based grammar checker trigger an AI writing detector? Did it end up rewriting substantial parts of the original submission?
Grammarly has seriously started rewriting whole paragraphs recently. I have been having to reject more and more "prompts" where in the past I would accept them almost by default, because they actually were grammar checks.
There are a bunch of typos in there which jar a bit ('deterioted'), but I guess that makes sense for this specific article.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means that I should edit that word myself. In the rare case where I'm stumped on the spelling I'll look at the suggested edit, but never as a matter of course.
The overarching issue here is that the complaint about AI slop points at a bigger problem that has been plaguing America in particular for many years, of which the AI slop era is only the current peak. The qualities of American writing have clearly been on a precipitous decline for a very long time now, predating AI slop and even spell checkers and computers.
Computers, digital text, and digital information distribution have made writing and thoughts cheap. And as we are surely all aware, humans rarely value that which is cheap, whether in money or in effort and the qualities that follow from it. What people seem reluctant, or maybe unable, to acknowledge is that predating the current AI slop was what could be called human slop: low-quality, low-effort, careless output that was cheap, regardless of whether AI slop now outperforms it.
It is why you are justified in pointing out that even in the post complaining about AI slop, the human has apparently abandoned what would have been common practice in just the recent past: using basic spellcheckers, simply reviewing what was written, and practicing with deliberation the art and skill of writing, grammar, and sentence structure.
No one is perfect and that is also what makes anything human, somewhat inexplicable and random variation. However, it takes a certain refinement before unique human character becomes a positive quality and is not just humans being sloppy ... human slop.
> The qualities of American writing have clearly been on a precipitous decline for a very long time now, predating AI slop and even spell checkers and computers.
> Every NYT bestseller from 1960 to 2014 falls in the seventh-grade level spread, from 4th to 11th.
> ...
> Since 2000, only 2 bestsellers have scored higher than 9th-grade readability.
> ... ...
> The bestselling authors of our time are writing at the 4th-grade level.
> > “8 books tie for the lowest score,” a 4.4, just above 4th-grade level. Prolific, well-known authors with huge sales: James Patterson, Janet Evanovich, and Nora Roberts.
> These three authors have written a combined total of 419 books.
I remember taking a machine learning course in which the instructor explicitly warned us to make wise fiscal decisions, based on the assumption that ML funding follows a hype-driven boom/bust cycle.
"Save during the summers and you'll make it through the winters".
I think some spaces will try to retain their value by actively combating LLMs, in the same way they combat hackers and trolls, and if they don't, they'll naturally die.
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job at discouraging said slop, it turns out that admitting your slop is slop is embarrassing or something.
Oh well, when the most powerful people on the planet manage to enshittify it enough, we'll be freed from AI...
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!
Had to?
Why would I put effort into reading something that had no effort put in by the author?
This guy needs an editor, AI or otherwise.
I guess it's OK if you enjoy reading someone expressing himself without communicating anything valuable or well produced. It's kind of like people who enjoy stream-of-consciousness poetry or unhinged personal blog posts. It's fine.
But most of us (I think) read for our own gain, expecting substantial / stimulating text that is ideally well researched and serves a clear purpose.
Something like that needs an editor, effective proofreading, and quite some time of work and rework.
Five years ago, I probably would have been annoyed by the same.
I have deep knowledge of the information and have done the process we're doing on two previous projects, but organizing all the stories would have been an absolute nightmare. I still spent half a day on this; I'd guess the fatigue from the boring parts would otherwise have made it take a week or maybe two. It only took half a day because I was doing the parts I enjoy (knowing things and describing them) and was able to offload the parts I'm not great at (using a lot of boilerplate language to organize the info I knew into scrum stories). Then I had a meeting, reviewed the stories with my coworkers, we had a discussion, deleted two or three of them that we determined weren't necessary, and fixed up one or two where I'd provided insufficient information about some context surrounding the coloring of a page.
It burned through a ton of Opus 4.6 tokens, looked through a ton of code (mostly that I’d written, pre-LLM), but has been amazing for helping me move into a lead position where grooming stories and being organized has always been my weakest point.
Also, when I wrote a postmortem for a deploy that had some issues, I wrote it all by hand. You have to know when the tools help and when they will hinder.
- spelling
- grammar, or weird grammar, as English is not my native language
- proofreading, and finding things that do not make sense in terms of sentence structure
I do not use it for ideas, discussing the writing, or anything else, because that defeats the purpose of writing it myself (creative writing).
Can you please share what gets degraded, and how? Sometimes I don't like a phrase it selects, but it's not common.
Yes, LLMs reduce the individual charm of prose, but the critique itself carries a romantic notion that we all loved the idiosyncratic failures of convention and meaning which went into highly identifiable personal styles, and which often go missing from LLM-edited work.
I'd contend this is not true. Even professional authors go to an editor who identifies things that need to be fixed. As the author of the text and knowing what it should be, it can be difficult to read what you wrote to find those mistakes.
> In exchange for that your audience gets 3-4x length normalized bullshit to read instead.
This is not at all what is implied by having an AI act as an editor: identifying misplaced commas, incorrect subject-verb agreement (e.g. counts), and incomplete ideas left in as sentence fragments.
You appear to be implying that the author is giving the AI agency to create the content, rather than using it as a tool to act as a super-charged Grammarly.
Yes, and these people are good at it. What’s your point?
If you need grammar checking, there are thousands of apps including word processors, web browsers and even most mobile devices that will check your inputs for grammar and spelling mistakes as you type. All of that without burning down the rainforests or neutering your thesis.
There are plenty of pre-LLM tools that can fix grammar issues.
> Can you please share what gets degraded, and how?
I'm not the person you asked, but IMO LLMs suck the style and voice out of the written word. It is the verbal equivalent of photos that show you an average of what people look like, see for example:
https://www.artfido.com/this-is-what-the-average-person-look...
As definitionally average the results are not bad but they are also entirely unremarkable, bland, milquetoast. Whether or not this result is a degradation will vary, of course, as some people write a lot worse than bland.
(Of course, that may well be exactly what you're looking for if you're writing an audit report or something.)
This sounds like an ESL issue. LLMs are good at proofreading ESL-written English text. They are not as good at proofreading experienced English writers.
And that's not really a hard bar to clear if you look at how people write comments online (including places like GitHub).
Anyone that uses punctuation and capitalises words probably automatically gets past the 70% confidence line.
I really don't see how this can be possible unless they're accepting abysmal recall? Perhaps I'm missing something fundamental here, but the idea that AI and non-AI assisted text can be separated with "nearly 0 false positives" just says to me that it's really just a filter for the weakest, most obvious AI generated text. Is that valuable?
It’s not nondeterministic
you can probably do the shannon entropy calculation yourself if you understand what the evaluation algorithm is
That said…if the evaluator is non-deterministic, then there’s no value in the estimate anyway
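For what it's worth, the Shannon entropy mentioned above is easy to compute yourself. Here is a per-character version (whether character-level entropy actually tells you anything about AI authorship is exactly what's in dispute here):

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Per-character Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A string of one repeated character scores 0 bits; a uniform mix of two characters scores exactly 1 bit per character.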
FWIW, your comment history here does not look like AI at all to me, and I think I have a very (maybe too?) high sensitivity to AI slop.
I really doubt those tools are good for anything
the amount of "that is obvious ai slop" comments i see on mine or other people's genuine non-ai writing has discouraged me from sharing anything more than roughly a paragraph for probably the rest of my life.
> "..but maybe it's a good thing that most of us don't allow this technology to reframe our thoughts."
No, you're not the only one experiencing this: I too had the same concerns as you: with every new thought, every new creation, I had to ask the AI's opinion, as if I were no longer able to judge, to decide, without consulting the AI (...just to be safe, you never know...).
The only way to regain your creative ability is to write down your thoughts yourself, read, reread, rewrite, correct, express your opinion...
What AI can't do is convey emotions.
"the Whispering Earring" – https://gwern.net/doc/fiction/science-fiction/2012-10-03-yva...
point being: it depends on how you use it. if you offload critical thinking to ai, you will probably (slowly) atrophy your critical thinking muscles. if you offload some bullshit boilerplate or repetitive tasks or whatever, giving you more time overall to do the critical thinking part, you will be fine.
What I mean is that, as someone with lots of experience, I don't care as much about no longer learning the basics as someone in their 20s or 30s maybe should.
Not sure what you mean by quickly. Back when I was in racing shape, if I stopped my training plan for as little as two weeks, (probably less actually, but I'm being conservative here) I would have a measurable drop in fitness.
Now, as someone who regularly walks the dog and bikes to work, I've got "less to lose" and probably wouldn't deteriorate as much.
Both magazines and books are valid forms of information consumption and books are not the only way to improve your writing, reading, and understanding of the world.
If you limit yourself to stuff from maybe five years ago or older, yeah it's going to be human-written and human-edited (ghostwriting still possible).
I've never been surprised by AI writing. Emotion is the biggest part of communication, and these grey boxes have none.
And I've definitely used it when I can't remember that one stinking word that I know exists and is perfect for this occasion.
"hey robot give me every word even mildly related to $SOME_SENSE_ON_THE_TIP_OF_MY_TONGUE" is a wildly satisfying and underrated experience.
As English is not my first language, I do run into the problem that the line between "fix my clumsy sentence" and "rewrite my thought" is very thin. Same with writing "boring" technical explanations versus more approachable content. I get pushback on both.
Any native English speaker who doesn’t live under a rock is very accustomed to reading and hearing English from non-native speakers and familiar with the common quirks and mistakes. English is quite forgiving as a language, we understand you. When in doubt, simplify it.
it's a couple mutually-conflicting languages in a trenchcoat; forgiveness and flexibility are perhaps its defining properties.
To the broader issue: "polish" (in any language) is only valuable insofar as it makes the ideas clearer, attests to innate qualities of the author and/or the investment of their time, or carries its own aesthetic value. As LLMs make (a certain kind of polish) cheap to produce, the value of the middle category attenuates to nothing.
this work is paramount. Without clear evidence of human filtering, a long, well formatted message/PR/doc is likely to reduce my estimate of the value/veracity/relevance of its content.
For years, even before LLMs, there have been trends of varied popularity to, for lack of a better word, regress - intentionally omitting capitalization, punctuation, or other important details which convey meaning. I rejected those, and likewise I reject the call to omit the emdash or otherwise alter my own manner of speaking - a manner cultivated through 30+ years of reading and writing English text.
If content is intellectually lacking, call that out, but I am absolutely sick of people calling out writing because they "think it's LLM-written". I'm sick of review tools giving false positives and calling students' work "AI written" because they used eloquent words instead of Up Goer Five[0] vocabulary.
I am just as afraid of a society where we all dumb ourselves down to not appear as machines as I am of one where machine-generated spam overtakes all human messaging.
[0] https://xkcd.com/1133/
That should leave you with media sources like nyt and your local library, which seems healthier to me. And maybe it might encourage a new type of forum to emerge where there is some decentralized vetting that you are a human, like verifying by inputting the random hash posted outside the local maker space.
I hope editorial departments everywhere are taking careful notes on the ars technica fiasco. Agree there's room for some kind of quick "verified human" checkmark. It would at least give readers the ability to quickly filter, and eliminate all the spurious "this sounds like vibeslop" accusations.
It does not resemble that. It is usually grammatically correct writing, but it is also pretty ineffective: bad writing with good grammar.
Let's grab a few books off the shelf (literally).
Douglas Adams' The Hitchhiker's Guide to the Galaxy has four emdashes on the very first page:
> It is also the story of a book, a book called THGTTG - not an Earth book, never...
Isaac Asimov's classic The Last Question: three emdashes on the first page (as printed in The Complete Stories, Volume I)
> ...they knew what lay behind the cold, clicking, flashing face -- miles and miles of face -- of that giant computer.
Mark Z. Danielewski, House of Leaves: Three emdashes on page 1
> Much like its subject, The Navidson Record itself is also uneasily contained -- whether by category or lection.
Robert Caro, Master of the Senate: Five emdashes on page one
> Its drab tan damask walls...were unrelieved by even a single touch of color -- no painting, no mural -- or, seemingly, by any other ornament
Other page 1s:
* Murakami - 1Q84: 1
* Murray/Cox - Apollo: 1
* Meadows - Thinking in Systems: 1
* Dostoyevsky - The Brothers Karamazov (Pevear/Volokhonsky translation): 4
* Caro - The Power Broker: 5
* Hofstadter - Godel, Escher, Bach: 3
Honestly, when I started this post I expected to have to dig deeper than page 1. The emdash is an important part of English-language literature and I reject the claim that we should ignore all writing that contains it.
Secondarily, I think there's a part of the discourse missing: the presence of a syntactic emdash in a sentence on the internet is not itself a strong signal of LLM-writing - but the presence of an actual emdash glyph (—) should raise some eyebrows, esp. in fora that aren't commonly authored in rich text editors (here, twitter, ...)
You're trading ability and competence for convenience.
should be:
>Although 80% of the content was my own writing, the fact that it was run through an LLM engine for grammar and vocabulary cross-checking meant that it failed the "probably written by AI" metric, and it was rejected.
That's ten corrections fixing straightforward typos and/or grammar and vocab mistakes in one sentence. Most are fairly objective, though I can understand different opinions on 2, 7, or maybe 10.

Relying on AI for editing seems to have atrophied the author's writing, if that is what he or she thinks is worth publishing on a blog like this. I would suggest practicing editing your own work, and not even thinking about passing it through AI to edit (especially when you were told not to use any AI!) for a while. Given that English is not your first (or even second or third) language, I would also suggest having a native speaker with some demonstrable writing skill review your writing and give feedback on how to make it more idiomatic. For example, writing being "run through an LLM" rather than "run in an LLM" is a relatively subtle difference compared to the others, and it's very common for preposition mistakes like this to show up when writing in a language other than your first. I am still hopeless with French prepositions.
Just like hand-made items are popular for their imperfections.
Personally, I'll just continue to use my own voice. I try to correct spelling and grammar mistakes, and proof-read my writing before posting.
It's not perfect, and my writing can at times be idiosyncratic, but it's my voice and it's all I've got left.
But don't be mistaken in thinking that those mistakes make it better, it just makes it mine.
eg: https://ids.si.edu/ids/deliveryService?id=SAAM-2011.6_1
from: https://americanart.si.edu/artwork/mandara-79001 https://www.museumofglass.org/ltlg
I want real humans giving real human opinions, not AI giving its best guess at the most "rewarding" weighted opinion.
The structured thing you mention is the one that bugs me most. I genuinely think that most human writing would be improved by having more of the "signposts" that LLMs overuse. Headings, context-setting sentences, bullet points where appropriate, etc. I was doing "list of bullet points with boldfaced intro for each one" before the LLMs were. But because the LLMs are saturating their writing with it, we'll all learn to take it as a sign of glib superficiality and inauthenticity, and typical good human writing will start avoiding everything of that kind, and therefore get that little bit harder to read. Alas.
And I was just noticing that my home-built blog render pipeline produces dumb quotes and that was embarrassing to me. Needs to be fixed.
(Counterpoint, dumb quotes are 7-bit clean and paste nicely... Hmm.)
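If anyone wants to fix the same thing in their own pipeline, a minimal quote "educator" can be done with two passes per quote character. This is my own sketch in the spirit of tools like SmartyPants; the function name is made up, and it deliberately ignores edge cases like inch marks, nested quotes, and decade abbreviations ('90s):

```python
import re

def smarten_quotes(text: str) -> str:
    # A double quote at start-of-string or after whitespace/open bracket opens...
    text = re.sub(r'(^|[\s(\[{])"', '\\1\u201c', text)
    # ...and every double quote left over closes.
    text = text.replace('"', '\u201d')
    # Same two passes for single quotes; the leftovers cover both
    # closing quotes and apostrophes.
    text = re.sub(r"(^|[\s(\[{])'", '\\1\u2018', text)
    text = text.replace("'", '\u2019')
    return text
```

Running this late in the render pipeline keeps the source files 7-bit clean while the published HTML gets curly quotes.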
I wrote a plugin for my blog that converts all hyphens (surrounded by whitespace) into em-dashes.
https://blog.nawaz.org/posts/2025/Dec/a-proclamation-regardi...
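That kind of conversion is essentially a one-line regex. This sketch is my own guess at the approach, not the actual plugin: lookarounds require whitespace on both sides so hyphenated words like "well-known" are left alone.

```python
import re

def emdashify(text: str) -> str:
    # Lookbehind/lookahead match the surrounding whitespace without
    # consuming it, so only a free-standing hyphen becomes an em-dash.
    return re.sub(r'(?<=\s)-(?=\s)', '\u2014', text)
```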
https://en.wikipedia.org/wiki/Quotation_mark#Summary_table
(That Wikipedia table shows that too by the way.)
1. There was a lot of slop pre-AI. In fact I’d say the majority of published writing was bad, formulaic, and just written to manipulate your emotions. So in some sense, I don’t really think pre-AI slop had more value. It’s just cheaper to make now.
2. AI has prompted me to study more off-beat writers that followed the rules of language a little less frequently. This includes a lot of people from circa 1890-1970, when experimenting with form was really in vogue.
3. Which brings me to my third point, which is that no matter how much the AI actually knows about writing, the person prompting it is limited by their own education and knowledge of writers. You can’t say, “make me a post in the style of Burroughs” if you don’t know who Burroughs was, or what his writing style was. So in a sense there is an increased importance to being educated about writing itself. Without it you’re limited in your ability to use AIs to write stuff and in your awareness of how much your non-AI written work is influenced by AI writing.
AI always seems so verbose and wordy.
I get that the mainstream ones have been RLHF'd to death, but surely there must be others that are capable?
This is called Hemingway because he was apparently good at communicating efficiently, which made him a popular author.
I never passed any AI writing as my own. I would feel utterly awful. Also, I love tweaking words until they sound perfect.
The number of people who just nonchalantly admit that AI writes their messages is honestly scaring me.
Ha. Well I guess you did, _this time_.
Can we not just ask an AI to correct our spelling mistakes and leave the rest alone?
you are missing the writing era, which is gone. whatever we have now will slowly congeal into cold grue that will get a name or names
the madness of being chastised for speakerphoning and disturbing people gulping the slop
what do we call that?
Plus, "lazy" would actually be just using AI to edit the writing.
LLM cant really do that. It can help you produce correct sentence where you struggle to create own, but it does not have capabilities to do what you suggest.
LLMs definitely can do this. The output tends to be overly positive though, claiming that any sort of rough draft you give them is "great, almost ready for publishing!". But the feedback you can get on clarity, narrative flow, weak spots... _is_ usually pretty good.
Now, following that feedback to the letter is going to end up with a diluted message and boring voice, so it's up to you to do with the feedback whatever you think best.
I used it many times for exactly this, with good results. It points out ambiguous contructs, parts that are dissonant from the tone I intend, etc.
I have no idea why you think that LLMs can't do that lol
There's nothing magical about a long text you write yourself vs a stream o reddit comments in a thread. It's all sentiment analysis on text. It can extract ambiguity, how ideas are connected in the context, categorize and summarize, etc.
You should try it and see it for yourself. Feed it some large text of a single author and ask it to do those things, see if the results are satisfactory.
> you can choose to use all the tools and make something gramatically and stylistically as close to perfect, but who would want to read something as dry
If it is dry, then it is not stylistically perfect. Per definition, dry writing is just an imperfect writing. Stylistically perfect writing does not have to be dry and usually is not dry.
What happens here is that people use "stylistically perfect" when they mean "followed a bad stylistic advice".
I do not mean this comment to be kick against AI. It is very good for some stuff, it is less good for other stuff. What annoys me is someone calling output superior while actually complaining about it being inferior.
Hey, maybe that LLM needs to be used differently to achieve actually good writing results.
The problem is that it has a pretty high false positive rate. Maybe it thinks it's AI because there are absolutely no spelling mistakes. Or maybe you're French and you use Latin-root words in English that are considered "too smart" for the average writer.
And the problem is that people run those tools, see "80% chance to be written by AI", and instead of treating the remaining 20% as reason enough to say they don't know, will assume it's definitely written by AI.
The article here is still full of AI slop, and so many people in the comments are defending the author. Blows my mind.
Grammarly has seriously started rewriting whole paragraphs recently. I have been having to reject more and more of its "prompts", where in the past I would accept them almost by default because they actually were grammar checks.
Personally, I would recommend they simply use any old editor with spellchecking enabled. That suffices for most writing where you just want to keep your own voice. To me, the red squiggly line just means I should edit that word myself. In the rare case where I'm stumped on the spelling I'll look at the suggested edit, but never as a matter of course.
Computers, digital text, and digital distribution have made writing, and thought, cheap. And as we are surely all aware, humans rarely value what is cheap, whether in money or in effort and the qualities effort brings. What people seem reluctant, or maybe unable, to acknowledge is that the current AI slop was preceded by what could be called human slop: low-quality, low-effort, careless output that was cheap, regardless of whether AI slop now outperforms it.
That is why you are justified in pointing out that even in a post complaining about AI slop, the author has apparently abandoned what was common practice until quite recently: using a basic spellchecker, or simply reviewing what was written and deliberately practicing the art and skill of writing, grammar, and sentence structure.
No one is perfect, and that is also part of what makes anything human: somewhat inexplicable, random variation. However, it takes a certain refinement before unique human character becomes a positive quality and is not just humans being sloppy ... human slop.
https://www.literaturelust.com/post/what-writers-need-to-kno...
> Every NYT bestseller from 1960 to 2014 falls in the seventh-grade level spread, from 4th to 11th.
> ...
> Since 2000, only 2 bestsellers have scored higher than 9th-grade readability.
> ...
> The bestselling authors of our time are writing at the 4th-grade level.
> > "8 books tie for the lowest score," a 4.4, just above 4th-grade level. Prolific, well-known authors with huge sales: James Patterson, Janet Evanovich, and Nora Roberts.
> These three authors have written a combined total of 419 books.
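For anyone curious what these "grade level" numbers actually measure: they typically come from a formula like the Flesch-Kincaid grade level, which combines average sentence length and average syllables per word. Here's a minimal sketch in Python; the syllable counter is a crude vowel-group heuristic (real tools use dictionaries), so treat the exact scores as approximate.

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels, discounting a trailing silent 'e'.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    # Flesch-Kincaid grade level:
    # 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short words and short sentences score low (`fk_grade("The cat sat on the mat.")` lands below grade 1), while long, polysyllabic sentences score far higher, which is exactly the axis the bestseller analysis is plotting.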
What it is going to be is a 'Slop Decade' - a much better label if you insist on having one.
"Save during the summers and you'll make it through the winters".
Several subreddits became AI slop submission repositories and their human engagement dwindled. Some subreddits that were inundated with AI slop implemented policies that ban it, and it seems to work well.
Strict no-slop policies work, and surprisingly, so do rules that require AI submissions to be tagged as AI. Forcing slop slingers to tag their slop does a good job of discouraging it; it turns out that admitting your slop is slop is embarrassing or something.
Or maybe there'll be the elite enjoying the world, while the rest of us have to work manual labor. But at least it'll be AI systems ensuring our compliance!