Modern communication often exhibits properties that are the exact opposite of a zip file. A zip file takes what you have, compresses it down to a smaller number of bits so it can be sent efficiently over the wire, and the reader re-inflates it back to its original form without any loss of information.
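To make the zip-file property concrete, here is a minimal sketch using Python's standard `zlib` module (any lossless codec would behave the same way): compression shrinks the payload, and decompression recovers it bit-for-bit.

```python
import zlib

# A longer, somewhat repetitive message compresses well.
message = b"Status update: everything is on track. " * 40

compressed = zlib.compress(message)    # fewer bytes go over the wire
restored = zlib.decompress(compressed) # the reader re-inflates it

assert restored == message             # bit-for-bit identical: nothing lost
assert len(compressed) < len(message)  # and genuinely smaller in transit
```

Both assertions hold: the round trip is lossless by construction, which is exactly the property that LLM-mediated inflation and summarization lacks.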
Asynchronous written communication, and sometimes even synchronous verbal communication, is now often mediated by probabilistic inflation and summarization. There is a growing tendency for people to inflate their writing with an LLM, after which the reader is presumably expected to paste it into an LLM again to compress it back down to the few bullets it was generated from:
- Bullet points
- LLM inflation
- Transmission
- LLM summarization
- Bullet points
Instead of a perfect compression, we have a lossy inflation. At best, this process costs extra time and processing resources; at worst, we've diluted the signal with a probabilistic tool, twice, potentially losing information along the way, exchanging probabilistic echoes rather than actually communicating earnestly and authentically.
When I interact with my friends, family, coworkers, or strangers on the internet, I do so with the hope of earnestly reaching a human being. I am transmitting something from my mind's eye into theirs, and I try to do that in a way that is clear, concise, and respectful of their time. In return, I expect them to do the same: to expend effort to make their internal model of the world understandable to mine.
That assumption of effort is the key problem. LLMs enable an inversion of effort that undermines the previous communication status quo: reading can now take orders of magnitude longer than writing took, and even deciding whether a message is worth parsing for signal amid the noise is itself extra cognitive overhead.
I typically prefer to get people's original prompt or bullets instead of the output they genned from said bullets; sometimes less is more, and the genned output buries much of the signal in noise. The erosion of trust these tools incur is under-discussed compared to the technicalities of how they work and what artifacts they can create, and I think it is actually the more important thing to reflect on.
In the same way that LLMs decohere when they consume only LLM output, I think a mirrored effect occurs in interpersonal communication once LLM usage has diluted the signal, with compounding effects over time. Under-considered use of LLMs can drastically undermine trust and signal quality, ultimately leading to a culture of mistrust and inefficiency.