People Are Audaciously Taking Undue Credit For AI-Generated Brainy Outputs
In today’s column, I examine the swiftly emerging trend of people taking full and unwavering credit for outputs generated by AI and large language models (LLMs), even when the prompt that spurred the AI was nothing more than a request to solve a vexing problem or answer a posed question.
The gist is that the person takes credit for the resulting AI-generated solution or answer without acknowledging that the AI did all the heavy lifting. It is a classic case of not giving credit where credit is truly due. The person presents themselves as having figured out the solution and emphasizes their intellectual acumen. Meanwhile, they hardly lifted a finger; all they did was ask the AI to find a solution for them. The mere act of posing a question to AI hardly seems to merit taking personal credit for the actual solution.
New research suggests that people are increasingly leaning into taking unwarranted credit for what AI produces. You might assume that those doing so are intentionally deceptive, driven by the desire to appear smart and mentally stellar. Though that certainly accounts for some people, many others seem to genuinely believe they produced the answer. I’ve previously noted that people are blurring the line between what the AI did and what they did (see the link here), and new research showcases how fast and far this tendency is spreading.
Humankind is quickly racing toward a false sense of intellectual entitlement, bolstered by inflationary AI-driven synthetic competence.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
When you interact with modern-era AI, the human-AI effort can be one of joint or shared collaboration. You get underway by bringing up a topic or question to AI. The AI responds. You critique the AI and add additional remarks. The AI gives additional responses. And so on it goes. The chat or conversation can involve both parties mutually seeking to solve a problem or arrive at a useful answer.
In that scenario, it would seem fitting for the human to take credit for the final answer or solution derived from the human-AI collaboration. You can argue whether the human needs to acknowledge the role of the AI. If the human did the yeoman’s share of the intellectual effort, perhaps the AI can be downplayed as a contributor. We might accept the human taking credit for what was derived.
What if a person only asked the AI a question and did nothing else of substance toward finding a solution or answer?
Well, that seems like a beast of a different kind. Unless the composed question itself has a great deal of meat or merit, we would raise our eyebrows at the person claiming credit when the AI did all the work. A person who claims they alone solved a challenging problem is overstepping ethical or moral bounds. They happened upon the solution by getting AI to do all the real work.
This phenomenon is especially rampant in our schools these days. A student logs into a popular AI such as ChatGPT, GPT-5, Claude, Copilot, Gemini, Grok, etc., and enters a prompt telling the AI to solve a tough algebraic equation or write a lengthy essay on the Declaration of Independence. The student doesn’t particularly interact with the AI about the matter. Instead, they wait until the AI has produced an answer, shamelessly put their own name on it, and turn it in as their own work.
I believe we would likely all agree that the student has shirked a solemn duty, brazenly outsourced their thinking processes to AI, done a notable disservice to their own intellectual development, and pretty much outrightly cheated.
The line between a human taking credit and sharing credit can be a fine one when it comes to the use of contemporary AI.
If a person uses a hammer and a screwdriver to fix their car, must they explicitly give credit to those tools? Not really. We wouldn’t be upset that they didn’t mention they had used a hammer and screwdriver. The human did the work. The tools were merely there for getting the job done.
A big difference in the context of AI is that the AI is performing a form of intellectual work. How that intellectual work was divided is indeed the crux for deciding on the apportionment of credit. A person who goes back and forth collaboratively with AI is putting in their fair share of intellectualism. But a person who does nothing other than spark the intellectual fireworks ought not to get much, if any, credit.
There are twists and turns to be considered. For example, suppose a person asks an innovative question that has never previously been posed. You could insist that the question itself merits due credit. Had the person not come up with the question, the AI would presumably not have been prompted to figure out a solution to the problem.
I will share with you an example that helps illustrate the dividing line between when credit is due to the human versus the AI.
Suppose that someone is at their job, and their boss asks them to identify why customer retention seems to be dropping precipitously. It’s a task specifically assigned to them to figure out. They pore through spreadsheets containing customer-related data. Nothing seems to stand out as the answer to the drop in retention.
Feeling jammed up, the person turns to AI:
- User entered prompt: “I need to figure out why our customer retention numbers dropped so sharply this quarter. I’ve looked at the spreadsheets for hours and can’t see the pattern.”
- Generative AI response: “I examined the data. I segmented the customers by signup cohort and compared retention against the timing of the new onboarding flow. You can see via the chart that I generated that the retention drop is directly tied to onboarding friction or UI confusion.”
- User entered prompt: “Wow, that looks great. Thanks.”
Voila, the worker has the answer. Nice.
Who Or What Gets The Credit
The worker goes to their manager and eagerly reveals why retention has been dropping.
- User to their manager: “The February onboarding redesign is exactly where the churn spike starts, especially among mobile users. The tutorial screens are causing people to abandon setup. It’s a good thing that I found this. It took a lot of concentrated thinking to get there.”
- Manager response: “Excellent work, you were ingenious to discover the pattern. Keep up the great job!”
You can plainly see that the worker has taken full credit for finding the solution. They patted themselves on the back by touting that they used their thinking processes to discover the pattern. No mention whatsoever about having used AI. Nor that the AI was the actual finder of the solution.
What do you think of the AI user’s behavior in this instance: appropriate or inappropriate?
One perspective is that there is no need to mention the use of AI. The worker got the job done. They can use whatever means they have at their disposal. It is nothing more than having selected a hammer and screwdriver to aid in getting the work accomplished. They picked the tool, in this case an LLM, and the result ought to be the focus.
Whoa, comes the retort, the manager is going to falsely believe that the worker used their own brain to solve the problem. The manager won’t know that the worker relied on AI as their intellectual powerhouse. If the manager comes up with new questions, there is a danger of assuming the worker can answer them on their own. Suppose the AI is down or otherwise unavailable; the worker would be stuck. The manager is being misled into assuming the worker has an intellectual capacity that they apparently do not actually have.
Using Ambiguity Or Possible Nibbles Of Credit
You might feel that the worker went overboard in describing how they came up with the solution. They should not have taken undue credit.
Imagine that they had said this instead:
- User to their manager: “The February onboarding redesign is exactly where the churn spike starts, especially among mobile users. The tutorial screens are causing people to abandon setup. That is what needs to be worked on.”
In this instance, the worker describes what the core issue is. They do not take credit per se. This is ambiguous in the sense that the worker hasn’t said how they arrived at the solution. You could claim the worker isn’t taking credit. They are simply stating what was found. On the other hand, you could contend that the worker is lying by omission; they are avoiding being explicit about how they used AI to find the solution.
Maybe the worker should have fessed up:
- User to their manager: “The February onboarding redesign is exactly where the churn spike starts, especially among mobile users. The tutorial screens are causing people to abandon setup. That is what needs to be worked on. I used AI in my effort to find the problem.”
You can now see that the worker mentions they utilized AI. This seems to be an acknowledgement that the AI played a role in discovering the solution. Of course, you can argue that the mention of AI was insufficient and underrepresented what the worker did versus what the AI did.
Some would assert that the worker should say something like this:
- User to their manager: “The February onboarding redesign is exactly where the churn spike starts, especially among mobile users. The tutorial screens are causing people to abandon setup. That is what needs to be worked on. I used AI in my effort to find the problem, and did nothing other than ask the AI to figure this out for me. I want to make sure that credit goes where it is due. The AI deserves the credit, not me.”
Yikes, this seems like the worker is now utterly cutting themselves out of the picture. The manager might as well fire the worker and hand the job over to the AI. Nobody in their right mind would seemingly go this route. It is a preposterous bit of self-snitching about AI usage.
The Psychology Of Taking Credit
Now that we’ve been through these trials and tribulations about taking credit when making use of AI, let’s consider why people are psychologically prone to take undue credit.
First, a person might know in their own mind that AI did the work, but they want others to believe they did the work. They are intentionally hiding that AI was used. This person mentally grasps what they are doing when it comes to taking undue credit. In their minds, to the victor go the spoils. They found a means to get an answer. It doesn’t matter what those means were. They deserve full credit for the outcome. Period, end of story.
Second, some people might believe that they deserve the mainstay of the credit for having picked the right tool for the job and properly employing that tool. They are willing to say they used AI. It can give them an aura of sophistication. In the end, they might acknowledge that AI was utilized while downplaying what the AI did, aiming for dual credit: credit for wisely opting to use AI, and credit for finding a useful answer.
Third, some people genuinely muddle what the AI did versus what they did. It goes like this. They think of AI usage as a spectrum. Even if they only entered a simple prompt, the fact that the AI produced a useful response is blurred in their head as though they and the AI worked hand-in-hand. It was a joint effort. Sometimes the AI does more, sometimes they do more. All told, it all balances out.
AI Credit Acknowledgement Goes This Way
We can classify the degree of credit that might be ascribed to AI like this:
- (a) Zero acknowledgement. Give zero credit to AI (even though AI was utilized).
- (b) Minimalization. Give some minimal credit to AI.
- (c) Proportional apportionment. Give realistic proportional credit to AI.
- (d) All credit. Give all the credit to AI.
And we can categorize people this way:
- (1) Sneaky on AI credit. Knowingly hides AI credit.
- (2) Conniving on AI credit. Knowingly assigns credit to AI disproportionately.
- (3) Confused on AI credit. Unknowingly muddies AI credit.
- (4) Deluded on AI credit. Unknowingly believes AI is not deserving of credit.
You can find people varying along those dimensions, including shifting from one camp to another depending upon the circumstances at hand. A small code sketch below models these dimensions for concreteness.
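For concreteness, here is a minimal sketch in Python, entirely my own illustrative construction (none of the names come from any research), that models the credit-acknowledgement levels (a)-(d) and the person categories (1)-(4) as enumerations, along with whether each category’s misattribution is a knowing act:

```python
from enum import Enum


class AICredit(Enum):
    """Degree of credit ascribed to the AI, per levels (a)-(d) above."""
    ZERO_ACKNOWLEDGEMENT = "a"  # no credit given to AI at all
    MINIMALIZATION = "b"        # token, minimal credit to AI
    PROPORTIONAL = "c"          # realistic, proportional credit
    ALL_CREDIT = "d"            # everything attributed to the AI


class PersonType(Enum):
    """How a person handles AI credit, per categories (1)-(4) above."""
    SNEAKY = 1     # knowingly hides AI credit
    CONNIVING = 2  # knowingly assigns credit disproportionately
    CONFUSED = 3   # unknowingly muddies AI credit
    DELUDED = 4    # unknowingly believes AI deserves no credit


# Whether the misattribution is a knowing act, mirroring the list above.
KNOWING = {
    PersonType.SNEAKY: True,
    PersonType.CONNIVING: True,
    PersonType.CONFUSED: False,
    PersonType.DELUDED: False,
}

if __name__ == "__main__":
    for person in PersonType:
        intent = "knowing" if KNOWING[person] else "unknowing"
        print(f"{person.name.title()}: {intent} misattribution")
```

Running the sketch simply prints which categories involve knowing versus unknowing misattribution, mirroring the point that two of the four camps are deliberate and two are not.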
Research Study Shines A Light On This
In a recently posted research study entitled “The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows” by Hyunwoo Kim, Harin Yu, Hanau Yi, arXiv, April 16, 2026, these salient points were made (excerpts):
- “This paper introduces the LLM fallacy, a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability.”
- “At its core, the phenomenon reflects an attributional misalignment between human and system contributions.”
- “As a result, users may disproportionately attribute outputs to themselves, even when generation is largely system-driven.”
- “In human–AI contexts, this dynamic is amplified: users may not fully experience ownership of generated content at a cognitive level yet still declare authorship at a reflective or social level, revealing a divergence between experienced and attributed authorship.”
- “Taken together, these mechanisms produce perceived competence inflation.”
The research noted that people will at times declare authorship on a disproportionate basis.
It could be that the AI did 90% and the human 10%, but the person makes it seem the other way around, with the AI doing 10% and the human providing 90%. Those numbers can shift depending on the situation. A person might not mention AI at all, thus essentially assigning 0% to the AI and taking 100% for themselves.
This apportioning can demonstrably vary. If a company is pushing its employees to use AI, the person might be motivated to give undue credit to the AI. The person might have done 70% of the work while the AI did 30%, but to appease the executives at the company, they toss the credit onto the AI and claim that the AI did 70% and they only did 30%. It is a bit of bitter irony that sometimes the AI is given undue credit, rather than the human.
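As a back-of-the-envelope illustration of that arithmetic, here is a small Python sketch (again my own illustrative construction, not something from the study) that computes the gap, in percentage points, between the human’s claimed and actual share of the work:

```python
def competence_inflation(actual_human_share: float,
                         claimed_human_share: float) -> float:
    """Return how many percentage points of credit the person
    over- or under-claims relative to their actual contribution.

    Shares are fractions in [0, 1]; the AI's share is the remainder.
    Positive values mean inflated self-credit; negative values mean
    the person is tossing undue credit onto the AI.
    """
    for share in (actual_human_share, claimed_human_share):
        if not 0.0 <= share <= 1.0:
            raise ValueError("shares must be fractions between 0 and 1")
    return (claimed_human_share - actual_human_share) * 100


# The 90/10 flip described above: the AI did 90%, the human did 10%,
# yet the human claims 90%.
print(competence_inflation(0.10, 0.90))  # 80.0 points of inflated self-credit

# The ironic reverse: the human did 70% but credits the AI with 70%.
print(competence_inflation(0.70, 0.30))  # -40.0 points (undue credit to AI)
```

The sign of the result distinguishes the two failure modes discussed above: inflating one’s own contribution versus deflecting credit onto the AI to please the boss.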
Where The World Is Headed
An alarming issue is that people might get so used to taking undue credit that they falsely overrate their own competence. Not only do they mislead others, but they also delude themselves. They come to believe that they are smarter and more intellectually capable than they actually are. Yet, without AI doing the intellectual work for them, they are potentially unable to find solutions to vexing problems.
What will happen if we have widespread inflationary AI-driven synthetic competence?
The world would seem to be tilting toward a great deal of trouble. People will claim they can do things and solve issues that they cannot do on their own. When AI hiccups or has issues, it might leave us in a perilous lurch. The reliance on AI will be extraordinarily extensive. Human thinking might end up decaying. Woe is humanity.
A remark often attributed to Mark Twain goes: “Great things can happen when you don’t care who gets the credit.” In the age of AI, some feel they are between a rock and a hard place when it comes to giving credit to AI. If they give any such credit, others might label them as having abandoned their own thinking processes and taint them as an AI enabler. If they don’t give credit to AI, they might later be seen as deceptive and dishonest.
The fine line is hazy, but at least be firm in your own mind about who did what, or what did what. Aim to keep your mind on the straight and narrow.