When an author uses AI for “polishing” a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and “blood” reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks “clean” to the casual eye, but its structural integrity – its “ciccia” – has been ablated to favor a hollow, frictionless aesthetic.

  • hendrik@palaver.p3x.de
    1 month ago

    Thanks for the comprehensive write-up! I guess that makes a lot of sense. If we’re just talking about regular AI assistant output, sure, I see that as well. I also have an additional issue with how these things are tuned… I never liked the tone, especially ChatGPT’s. It’s way too repetitive, in an annoyingly generic register, mixed with know-it-all vibes. But it doesn’t know it all. And then it talks to me like I’m 4 years old, playing my helpful sycophant. It outputs 3 pages of text for any simple task or question, and there’s about no substance in all those many sentences. Unless it decides to lecture me on ethics… saying my email is phrased way too harshly. And then it goes ahead and replaces my witty sarcasm with some bland phraseology, like we’re in some customer-support hell… I see no reason to use it as a tool to “refine” my emails.

    Though I think that’s mainly due to the role-playing as a “helpful assistant”, which people seem to prefer?! Not sure if that’s necessarily in the maths. But it’s enough to deter me… Well, that, and the fact that it removes key information in some ill-suited attempt to “summarize”, or brushes over important paragraphs.