AI will not write for me.
An LLM can do a bang-up job at manipulating language in a useful way. That much is hard to dispute. But its Achilles’ heel is its own nature. There’s an untraversable chasm between it and the perspective, experience, and accountability of a human.
At best, AI-generated content is the shallow, plastic average of countless human perspectives and experiences. This makes it tremendously useful for quick, transactional information. But beyond that, it necessarily robs the reader of the real reason most content is worth reading: the human behind it.
I say all this as a daily, satisfied user of AI. It’s invaluable for solving technical problems (including generating chunks of code), for helping me think of a phrase different from the one I just used a few sentences prior, and for quick answers to small questions. That kind of incidental value matters, and I’m glad to have it as a tool at my disposal.
Where I draw the line is letting it articulate an idea on my behalf. I won’t permit it to lobotomize the idiosyncratic vantage point only I can offer, or risk having it express an idea with which I don’t align.
Put more plainly, I’ve committed to never letting AI write for me. It may help generate a contrived code example or feed me an analogy that helps articulate an idea. But it will not own the expression of the idea itself. You can count on that for everything you read on this site.
🫡