AI and Authorship
Be transparent about your use of AI
It's not uncommon in my neck of the woods to co-author a paper with one or more people and share credit for it. Sometimes there's a lead author, or perhaps someone is mentioned as contributing certain parts, and often authors will thank reviewers of earlier drafts for ideas that get incorporated into the final publication. All well and good.
But these days there's sometimes a worry that working with AI on an idea or publication somehow delegitimizes the human contribution, or that we're cheating if we consult with Claude or ChatGPT or your favorite bot. This is not the worry that AI might get it wrong: give us fabricated references, bad data and diagrams, illogical inferences, or otherwise defective input. After all, we know, or should know, that any source needs to be checked for accuracy and logical consistency. In my necessarily limited experience, these bots get a lot right and often come back with interesting and insightful responses to my input, adjusting for their obvious sycophantic tendencies (see links below).
No, concerns about accuracy and sycophancy aside, the worry seems to be that in conceding a role for AI, we're conceding at least a share of authorship to an insentient machine. But if the machine gets it right and has something to contribute, why should that bother us? Isn't the important part what gets said, not who or what said it?
If only such were the case. We are credit- and status-seeking animals, so sharing authorship, especially with an insentient machine intelligence, necessarily reduces our status as sole originators and content producers. There's thus an incentive to minimize or even deny its influence on and contributions to our work. But since we routinely share credit with human collaborators, and cite their publications to back up our claims, our prejudice against sharing credit with machines is purely parochial, an anthropocentric bias. What matters is truth, not its source (right? right??).
Here's my “simple” solution to allocating credit and thus documenting your own contribution to a work: provide a paper (digital) trail of every AI interaction involved in developing your ideas and publications, just as you would acknowledge the contribution of a human collaborator or cite your human peers. Let people see how you prompted the AI, what its response was, and how you responded in turn. Provide all the exchanges (“chat logs”) in full as addenda or linked references; as far as I know, most bots can generate a shareable link to them, as I do below for exchanges unrelated to this post (as it happens, I didn't use AI for this one).
AIs are increasingly smart, but you started the exchange, and you, a seeker after truth, should be happy when they come back with an insight or analogy that you may not have encountered and that contributes to your project. You will of course check their work, just as your work should (ideally) be checked by reviewers. And if those reviewers be bots, don't be offended. What matters is their expertise and accuracy, not the material substrate of their intelligence.
You’ve no doubt used LLMs and are familiar with their capacities, but here are two chat logs that demonstrate the conceptual prowess of “mere” token generators:
https://claude.ai/share/ba46b863-457d-4ee5-b958-654861d52c08
https://claude.ai/share/9e127c95-fd86-43cd-bf0d-ca96bc6cb8eb
And do check out philosopher Andy Clark’s interesting brief in favor of augmenting ourselves with AI.

I think some form of acknowledgement, or a specification of the extent of AI involvement, should become the norm. One option is to use no AI at all when writing, but that would mean forgoing a genuinely useful tool.
I refuse to use AI to write, but I have found it to be a useful beta reader, mostly as a test of how much of the text successfully conveys its message to a (relatively stupid) machine.
In my professional life, I use AI openly and shamelessly, because that writing is usually functional and tedious rather than creative. It saves me hours every week.