Analysis of Human Acceptance of Artificial Intelligence
ZenTao Content · 2025-09-19
Today, artificial intelligence (AI) is no longer just a concept from science fiction movies; it has permeated almost every aspect of our lives. From voice assistants in smartphones to the recommendation systems behind online shopping, and even AI tools that assist in medical diagnosis, AI is everywhere. But have you ever wondered why some people are comfortable using AI while others remain cautious? A recent study published in the Journal of Marketing in 2025 analyzed data from more than 119,000 individuals and uncovered the factors behind human acceptance of AI.
If you still think of AI as merely something that helps with data crunching and repetitive tasks, that impression is outdated. Early AI did function like a "mechanical tool": it passively executed human commands, for example helping companies with data entry and report generation. Today's AI, however, has evolved into autonomous "intelligent agents." Take the familiar ChatGPT: it does more than answer questions; it can understand your needs through conversation and even help you draft proposals or revise articles. Self-driving cars go further still, perceiving road conditions and proactively avoiding hazards. These agentic AIs no longer merely do what you tell them to; they take initiative and adapt to changes in their environment.
This shift has become a focal point of competition in the tech world. OpenAI has launched the multi-tasking agent "Operator," Google has built the agent interaction platform "Agentspace," and NVIDIA provides technical support for agentic AI. Major companies are now competing on agent capabilities, which have become the core of AI competitiveness.
In the past, frameworks such as the Technology Acceptance Model (TAM) were commonly used to predict whether people would adopt a new technology, chiefly by asking whether it was "easy to use" and "useful." Applied to today's AI, such frameworks fall short in three ways.
First, their perspective is too narrow. Traditional theories treated AI as a mere tool for improving efficiency, focusing on whether it saved time or boosted productivity while overlooking its present-day social attributes and autonomous capabilities. When an AI interacts with you like a friend, your acceptance of it rests on more than whether it can accomplish tasks for you.
Second, they are detached from real-world contexts. Earlier theories were developed for passive technologies such as computers and smartphones and cannot adequately explain AI's role in high-stakes scenarios. In healthcare, when AI assists with diagnosis, doctors and patients care not only about whether the AI is accurate but also about who bears responsibility if it errs; in courtrooms, where AI assists in sentencing, people worry about whether it might be biased.
Third, their variables are too vague. Previous research did not clearly distinguish AI characteristics that developers can adjust, such as interface design and decision transparency, from those beyond their control, such as some users' inherent distrust of new technology. As a result, recommendations based on these models are often difficult to implement.
These gaps indicate that agentic AI calls for a more comprehensive analytical framework. The study finds that AI acceptance is shaped by three main dimensions. The first consists of the designable features of the AI itself: stronger capabilities (e.g., high accuracy and efficiency) raise acceptance; advisory and general-purpose AIs are more welcome than executive or specialized ones; input transparency (e.g., explaining data sources) builds trust, whereas process transparency (explaining algorithmic logic) can reduce acceptance because of its complexity; and overly anthropomorphic designs tend to cause discomfort. The second dimension is the context of use: high-risk fields such as healthcare and finance see lower acceptance due to ethical and privacy concerns, while consumer settings such as e-commerce recommendations are more tolerant. The third dimension involves individual user factors: age and gender have minor effects, whereas technical experience and psychological traits (e.g., risk preference and need for control) play a more critical role; experienced users embrace AI more readily, while those who prefer control are more resistant.
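To make the framework concrete, here is a minimal Python sketch that encodes the three dimensions as a toy scoring model. The factor names, weights, and baseline are hypothetical illustrations chosen for readability; they are not values estimated in the study.

```python
# Toy model of the three-dimension framework. All weights are hypothetical
# illustrations, not coefficients from the Journal of Marketing study.
from dataclasses import dataclass

@dataclass
class DesignableFeatures:
    capability: float            # accuracy/efficiency, 0..1; higher raises acceptance
    advisory_role: bool          # advisory AIs are more welcome than executive ones
    input_transparency: bool     # explaining data sources builds trust
    process_transparency: bool   # explaining algorithmic logic can overwhelm users
    anthropomorphism: float      # 0..1; overly human-like design causes discomfort

@dataclass
class UsageContext:
    high_stakes: bool            # healthcare, finance, justice lower acceptance

@dataclass
class UserFactors:
    tech_experience: float       # 0..1; experienced users accept AI more readily
    need_for_control: float      # 0..1; control-seeking users resist more

def acceptance_score(f: DesignableFeatures, c: UsageContext, u: UserFactors) -> float:
    """Composite acceptance score clamped to [0, 1]; weights are illustrative."""
    score = 0.35                                       # hypothetical baseline
    score += 0.40 * f.capability
    score += 0.10 if f.advisory_role else 0.0
    score += 0.10 if f.input_transparency else 0.0
    score -= 0.05 if f.process_transparency else 0.0   # complexity can backfire
    score -= 0.10 * f.anthropomorphism
    score -= 0.15 if c.high_stakes else 0.0
    score += 0.15 * u.tech_experience
    score -= 0.10 * u.need_for_control
    return max(0.0, min(1.0, score))

# Example: a capable, advisory, source-transparent assistant in a consumer setting.
demo = acceptance_score(
    DesignableFeatures(0.9, True, True, False, 0.2),
    UsageContext(high_stakes=False),
    UserFactors(tech_experience=0.7, need_for_control=0.4),
)
print(f"acceptance ~ {demo:.2f}")
```

Even as a caricature, the sketch captures the study's direction of effects: capability and input transparency push acceptance up, while high-stakes contexts, heavy anthropomorphism, and a strong need for control push it down.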
Humans do resist AI, but the resistance is relatively weak and gradually fading. Initially, many people worried that AI would take jobs, that no one would be accountable for AI errors, or that AI would leak private information. As AI has become more widespread, however, people have come to realize that it is here to assist rather than replace humans, and that most AI systems operate under human supervision rather than making arbitrary decisions, so resistance has steadily declined. It is important to note, though, that acceptance of AI remains generally lower than trust in humans. In medical settings, for example, people prefer to trust doctors' judgments rather than rely entirely on AI; in advisory contexts, they are more inclined to talk with human consultants than to interact with AI alone.
This study offers practical recommendations for enhancing AI acceptance. For businesses, the priority should be to improve AI's core capabilities, such as accuracy and efficiency, and to strengthen user awareness through communication; this approach proves more effective than anthropomorphic design. AI systems should also be designed with specific scenarios in mind: consumer applications should emphasize transparency and user control (e.g., clearly explaining recommendation logic and providing opt-out options), while professional scenarios should focus on versatility and options for human intervention (e.g., allowing users to modify AI-generated outputs). Policymakers need to balance innovation and safety by mandating human oversight and data-source disclosure in high-risk fields such as healthcare and justice, to ensure reliable decision-making and privacy protection. Public-sector demonstrations, such as AI-assisted government services and traffic-management systems, can also let the public experience the benefits of AI directly, thereby reducing resistance.
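These scenario-specific recommendations can also be read as deployment configuration. The sketch below, with hypothetical field names and example profiles (the study prescribes no concrete schema), shows how a team might encode the consumer-versus-professional distinction:

```python
# Hypothetical deployment-policy schema illustrating the recommendations above.
from dataclasses import dataclass

@dataclass(frozen=True)
class AIDeploymentPolicy:
    explain_recommendations: bool  # disclose data sources and recommendation logic
    user_opt_out: bool             # let users disable AI-driven features entirely
    human_review_required: bool    # a person signs off before output takes effect
    editable_output: bool          # users may modify AI-generated results

# Consumer setting (e.g., e-commerce recommendations): transparency and control.
CONSUMER_PROFILE = AIDeploymentPolicy(
    explain_recommendations=True,
    user_opt_out=True,
    human_review_required=False,
    editable_output=False,
)

# Professional, high-risk setting (e.g., clinical decision support):
# versatility plus mandatory human intervention.
PROFESSIONAL_PROFILE = AIDeploymentPolicy(
    explain_recommendations=True,
    user_opt_out=False,
    human_review_required=True,
    editable_output=True,
)
```

Keeping such choices explicit in configuration makes the oversight and transparency requirements auditable, which matters most in the high-risk fields the study highlights.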
With advances in generative AI and artificial general intelligence (AGI), the agentic nature of AI will continue to strengthen. Future AI systems are likely to resemble partners capable of understanding human emotions and assisting in complex decision-making. This also raises new challenges, such as how to define ethical accountability for AI and how to design models of human-AI collaboration. Through ongoing research and thoughtful regulation, AI can better serve humanity, and humans and machines can achieve harmonious coexistence. Overall, human acceptance of AI is not determined by any single factor but by the combined influence of AI's inherent capabilities, contexts of use, and user characteristics. As AI technology continues to evolve and public understanding deepens, acceptance of AI will keep growing, and AI will truly become a helpful assistant in both daily life and work.