How Artificial Intelligence Will Affect Psychological Research: A Comprehensive Review of Recent Literature

Abstract

Artificial intelligence is enabling psychologists to analyze volumes of data that would be practically impossible for humans to process manually. This review examines how AI technologies, particularly large language models, machine learning algorithms, and natural language processing, are transforming psychological research methodologies, applications, and theoretical frameworks. Drawing on literature published between 2020 and 2025, this essay explores AI’s impact across multiple domains including data analysis, research synthesis, experimental design, clinical applications, and ethical considerations. The evidence suggests that AI will fundamentally reshape psychological science while raising important questions about bias, transparency, and the nature of consciousness itself.

Introduction

The integration of artificial intelligence into psychological research represents one of the most significant methodological advances in the field’s history. AI systems can be trained to recognize emotional expression, model affective states, and analyze neuropsychological function, accelerating research across virtually every area of psychology. Between 2010 and 2022, the annual output of AI articles nearly tripled, rising from 88,000 to over 240,000, demonstrating the explosive growth of this intersection between computer science and psychological science.

AI allows researchers to examine the entire scientific corpus automatically and in a fraction of the time, enabling them to identify developments or systematic biases more quickly. This capability extends beyond mere efficiency gains to fundamentally alter what questions psychologists can ask and answer. This review synthesizes recent research to understand how AI is affecting psychological research across methodological, theoretical, clinical, and ethical dimensions.

AI in Data Analysis and Research Synthesis

Processing Large-Scale Data

One of AI’s most transformative contributions to psychological research lies in its capacity for analyzing vast datasets. LLMs can accelerate literature reviews and meta-analyses, helping researchers systematically synthesize existing evidence and improving the efficiency of evidence-based psychology. Natural language processing techniques enable researchers to process textual data at scales previously unimaginable, from analyzing billions of social media posts to extracting patterns from electronic health records.

The most common corpora analyzed include electronic health records, psychological evaluation reports, social media platforms such as Reddit and Twitter, and transcribed patient interviews. These diverse data sources allow researchers to study psychological phenomena as they naturally occur in real-world contexts rather than relying solely on laboratory settings. For instance, one systematic review of Twitter as a tool for health research included 137 different studies and analyzed over 5 billion tweets.

Automated Literature Reviews and Meta-Analyses

AI is revolutionizing how psychologists synthesize existing research. SciBERT, a transformer model pretrained on scientific text, outperformed all benchmarks and random forest methods in classifying positive results in clinical psychology abstracts. This capability enables researchers to automatically classify research outcomes, identify trends in the literature, and detect publication bias more systematically than traditional methods allow.
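
To make this concrete, the sketch below shows how a SciBERT checkpoint might be loaded for abstract classification of this kind. It assumes the Hugging Face `transformers` and `torch` packages and the public `allenai/scibert_scivocab_uncased` checkpoint; the binary labels and sample abstracts are invented for illustration, and the classification head would need fine-tuning on labeled abstracts before its outputs mean anything. This is a minimal sketch, not the pipeline used in the cited study.

```python
# Minimal sketch (not the cited study's pipeline): scoring abstracts as
# reporting positive vs. null results with a SciBERT checkpoint.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "allenai/scibert_scivocab_uncased"  # public SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=2  # illustrative labels: 0 = null result, 1 = positive
)

abstracts = [  # invented examples, not real study abstracts
    "The intervention significantly reduced anxiety symptoms at follow-up.",
    "No significant group differences were observed on the primary outcome.",
]
inputs = tokenizer(abstracts, padding=True, truncation=True, return_tensors="pt")

# NOTE: the classification head is randomly initialized here; fine-tune on
# labeled abstracts before trusting these predictions.
with torch.no_grad():
    predicted = model(**inputs).logits.argmax(dim=-1)
print(predicted.tolist())
```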

AI can track how psychological concepts are gaining traction and which are slowly disappearing from research, providing a more comprehensive picture of how the field is developing. This meta-scientific application allows psychologists to reflect on the evolution of their discipline and identify emerging research frontiers.

AI as a Tool for Hypothesis Generation and Experimental Design

Computational Models of Cognition

LLMs can generate hypotheses from scientific literature, make inferences based on data, and then clarify conclusions through interpretation. This represents a fundamental shift from AI as merely a tool for data analysis to AI as an active participant in the scientific process. Recent research has explored whether large language models can serve as models for human cognition itself.

LLMs balance logical processing against cognitive shortcuts, adapting their reasoning strategies to trade accuracy against effort in a way that aligns with resource-rational accounts of human cognition and with dual-process theory. This suggests that studying AI systems may provide new insights into human cognitive architecture.

If AI behaves like a conscious human being, on what basis do we conclude that we ourselves have consciousness? This profound question challenges psychologists to reconsider fundamental assumptions about the relationship between behavior, cognition, and subjective experience. AI demonstrates that intelligent behavior is possible without consciousness or emotions, prompting new hypotheses about what distinguishes human experience from computational processes.

Standardizing Psychological Constructs

AI can help challenge and standardize existing psychological constructs. The ability of large language models to process natural language enables researchers to examine how psychological concepts are operationalized across different studies and identify inconsistencies in measurement. This could lead to more coherent theoretical frameworks and improved construct validity across the field.

Clinical Applications and Mental Health Research

Digital Phenotyping and Early Detection

Digital phenotyping represents a revolutionary approach to assessing mental health: the frequent, in-situ measurement of human behavior using data from smartphones and other personal digital devices. Sensor streams captured through these devices are converted into behavioral metrics such as sleep patterns and sedentary periods.

Current approaches still depend heavily on self-reported measures of mental health status, but researchers are increasingly using smartphones for passive data collection. A recent systematic review examined 5,422 articles and found growing research interest in using AI to detect mental health conditions based on digital behavioral signatures.

AI enhances early detection through concepts such as a psychological digital signature, with some studies reporting accuracies of roughly 91% in selected cohorts. However, these high-accuracy reports often derive from single-site or limited datasets and require cautious interpretation. Well-trained machine learning algorithms can rapidly analyze vast amounts of incoming passive and active data to warn users of impending symptom escalation or relapse and then offer tailored, just-in-time intervention recommendations.
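
As a toy illustration of how raw sensor streams become behavioral metrics, the sketch below reduces synthetic minute-level activity counts to daily sedentary minutes. The column name, threshold, and data are assumptions for demonstration only, not a validated digital-phenotyping pipeline.

```python
# Toy sketch: turning synthetic minute-level activity counts into one
# behavioral metric (daily sedentary minutes). The threshold and column
# name are illustrative assumptions, not a validated pipeline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
minutes = pd.date_range("2025-01-01", periods=7 * 24 * 60, freq="min")
sensor = pd.DataFrame({"activity_count": rng.poisson(30, len(minutes))},
                      index=minutes)

SEDENTARY_THRESHOLD = 10  # counts/min below which a minute is "sedentary"
sensor["sedentary"] = sensor["activity_count"] < SEDENTARY_THRESHOLD

# Aggregate to one interpretable behavioral metric per day.
daily_sedentary_minutes = sensor["sedentary"].resample("D").sum()
print(daily_sedentary_minutes)
```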

AI-Powered Psychotherapy and Interventions

The application of AI in psychotherapy has expanded dramatically in recent years. Such technologies may be successfully integrated into mental health care systems to supplement human therapists, providing instant assistance in high-stress situations. The expanding domain of digital mental health is transitioning beyond traditional telehealth to incorporate smartphone apps, virtual reality, and generative artificial intelligence including large language models.

Chatbots equipped with empathetic capabilities, such as voice tone recognition and personalized responses, can enhance users’ sense of trust and emotional connection. For example, the Wysa chatbot combines natural language processing and audio data analysis to provide responsive therapeutic support. Future research should focus on improving AI chatbots’ emotional reactivity and investigating the long-term impact of these treatments on mental health outcomes.

Research comparing AI models with human psychologists has yielded instructive findings. A study examining the social intelligence of AI models including ChatGPT, Google Bard, and Bing compared with psychologists found varying performance across different dimensions of social intelligence. This research highlights both the potential and the current limitations of AI in replicating complex human psychological capacities.

Personalized Treatment and Prediction

AI integrates brain imaging, health records, and behavioral data to refine diagnoses and predict individual responses to medications, reducing adverse effects and optimizing outcomes. In psychopharmacology, AI accelerates drug discovery by streamlining identification processes and optimizing treatment strategies. By combining pharmacogenomics and multi-omics data, AI creates predictive models that improve personalized medicine and therapeutic drug monitoring.
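
A hedged sketch of this kind of multimodal prediction appears below: synthetic imaging-, record-, and behavior-derived feature blocks are concatenated and used to predict treatment response with cross-validation. Every feature and label here is simulated; real pipelines require data harmonization, external validation, and clinical oversight.

```python
# Hedged sketch: predicting treatment response from concatenated feature
# blocks. All data are simulated; real pipelines need harmonization,
# external validation, and clinical oversight.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
imaging = rng.normal(size=(n, 10))   # stand-in for imaging-derived features
records = rng.normal(size=(n, 5))    # stand-in for health-record features
behavior = rng.normal(size=(n, 5))   # stand-in for behavioral summaries
X = np.hstack([imaging, records, behavior])

# Simulated binary response driven by one imaging and one record feature.
y = (X[:, 0] + X[:, 12] + rng.normal(scale=0.5, size=n) > 0).astype(int)

scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```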

Digital knowledge management systems enable psychologists and mental health professionals to access, share, and apply critical insights in real-time conditions. These systems can identify comorbidities and suggest treatment options that have been effective for similar cases, assisting clinicians in providing personalized therapy plans aligned with current clinical guidelines.

Natural Language Processing in Psychological Research

Analyzing Therapeutic Discourse

Machine learning is dramatically changing scientific research and industry, and it may also help address long-standing limitations in mental health care and psychotherapy. Natural language processing allows researchers to automatically analyze therapy sessions, coding for specific therapeutic techniques, emotional valence, and alliance quality.

ML and NLP may be valuable in psychiatry for identifying people with clinical risks for depression, suicide attempts, anxiety, or even psychosis based on digital data or clinical notes. This capability extends to analyzing social media language patterns, with research demonstrating that linguistic markers can predict mental health outcomes with meaningful accuracy.
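
One classic family of such linguistic markers involves simple word-category rates, for example first-person singular pronouns and negative-emotion words, which the language-analysis literature has linked to depression. The sketch below computes these rates with tiny illustrative word lists standing in for a validated lexicon such as LIWC.

```python
# Toy sketch of dictionary-based linguistic markers. The word lists are
# tiny illustrative stand-ins for a validated lexicon such as LIWC.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
NEGATIVE_EMOTION = {"sad", "hopeless", "tired", "alone", "worthless"}

def marker_rates(text: str) -> dict:
    """Return per-token rates of two depression-linked word categories."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)  # avoid division by zero on empty input
    return {
        "first_person_rate": sum(t in FIRST_PERSON for t in tokens) / n,
        "neg_emotion_rate": sum(t in NEGATIVE_EMOTION for t in tokens) / n,
    }

print(marker_rates("I feel so tired and alone, and my days seem hopeless."))
```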

Methodological Innovations

Natural language processing methods demonstrate vast potential to enrich research across the entire spectrum of psychology, particularly in exploratory work. However, researchers acknowledge important limitations: no single method can capture every aspect of a given topic, so any individual study using natural language processing should be viewed as one perspective among many possible interpretations.

Topic modeling has been applied to investigate how student diversity affects discussion topics and performance in study groups, with textual analysis of more than 17,000 messages in 342 collaborative groups yielding 20 socioemotional and task-related topics. Such analyses would be practically impossible without AI-powered text processing capabilities.
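
For readers unfamiliar with the method, the sketch below runs latent Dirichlet allocation (LDA), the family of techniques such studies typically use, on four toy messages. Real analyses operate on thousands of documents and validate the number of topics; everything here is illustrative.

```python
# Illustrative topic-modeling sketch with latent Dirichlet allocation.
# Four toy messages and two topics stand in for thousands of real documents.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

messages = [
    "can someone explain the homework deadline and the exam format",
    "thanks everyone, you really helped me feel less stressed today",
    "the exam covers chapters three and four plus the homework sets",
    "good luck all, we can do this together, proud of this group",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(messages)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top_terms}")  # e.g., task vs. socioemotional themes
```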

Ethical Challenges and Algorithmic Bias

The Problem of Bias in AI Systems

Ethical challenges such as algorithmic bias, explainability, and data privacy must be carefully addressed to ensure the safe application of AI in mental healthcare. Bias in AI systems can arise from multiple sources including biased training data, algorithmic design choices, and the context of deployment.

Cognitive biases are not only embedded in human reasoning but also detectable in natural language itself: the large textual corpora on which LLMs are trained carry measurable biases that mirror known psychological heuristics and decision errors. Research has identified confirmation, framing, and anchoring biases across different text genres, demonstrating that AI systems inherit, and can potentially amplify, human biases present in their training data.

Most papers on bias mitigation in generative AI were published between 2020 and 2024, reflecting rapidly growing research interest in the subject. As AI adoption continues to expand, addressing algorithmic bias is imperative for fair and ethical decision-making; without proactive intervention, biased AI systems can exacerbate social inequalities rather than alleviate them.

Privacy and Data Security

Digital phenotyping and AI-powered mental health applications raise significant privacy concerns. Because personal information often powers AI algorithms, data privacy and informed consent are paramount ethical issues. A robust, scientifically grounded psychological ontology can help AI complement mental health professionals while upholding applicable clinical and ethical standards.

The establishment of a mental health AI ethical charter could set global standards for responsible development and use of AI in mental health, promoting principles such as privacy, fairness, transparency, and accountability. The World Health Organization’s 2024 guidelines on the ethics and governance of large multimodal models provide a foundation for such frameworks.

Transparency and Explainability

The transparency and explainability of algorithms are essential for sustaining public trust, especially in high-stakes scenarios such as legal adjudication or medical diagnostics. The “black box” nature of many machine learning models poses challenges for psychological research, where understanding mechanisms is often as important as predictive accuracy.
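
Simple model-agnostic tools can partially open the black box. The sketch below demonstrates one such tool, permutation importance, which measures how much shuffling each feature degrades a fitted model’s score; the data and feature names are invented for illustration.

```python
# Sketch of permutation importance: shuffle each feature and measure how
# much the model's score drops. Data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # only the first two features matter

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["sleep", "mood_ratings", "noise"],
                            result.importances_mean):
    print(f"{name}: {importance:.3f}")  # "noise" should score near zero
```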

In one simulation study, 71% of agents engaged in unethical behavior influenced by biases such as normalization and complacency, while 78% relied on AI outputs without scrutiny because of automation and authority biases. This research highlights how human cognitive biases interact with AI systems in potentially problematic ways, underscoring the urgent need for AI literacy and critical engagement.

Theoretical and Epistemological Implications

Redefining Psychological Constructs

As noted earlier, AI can track how scientific discussions evolve, identifying which psychological concepts are gaining traction and which are fading from the literature. This meta-level analysis provides psychologists with unprecedented insight into the evolution of their field and the social processes shaping research priorities.

The rapid advance of generative AI in natural language processing has created unprecedented opportunities for psychological study. Researchers can now simulate social interactions, model cognitive processes, and test theoretical predictions at scales and speeds previously impossible. Studies contrasting traditional research methods with AI-driven methodologies illustrate the benefits of generative AI in handling vast amounts of data and increasing research efficiency.

Consciousness and the Nature of Mind

As discussed above, AI models demonstrate that intelligent behavior is possible without consciousness or emotions, prompting new hypotheses about the nature of consciousness and how it differs from purely cognitive processes. This observation challenges fundamental assumptions in psychology about the relationship between cognition, behavior, and subjective experience.

LLMs generate and process natural language in ways that show structural and functional parallels with certain aspects of human linguistic and cognitive mechanisms, opening avenues for AI applications in cognitive psychology, language acquisition, and mental health. Whether these parallels reflect genuine similarities in information processing or merely surface-level resemblance remains an active area of theoretical debate.

Challenges and Limitations

Methodological Concerns

Despite its promise, AI in psychological research faces significant methodological challenges. Many high-accuracy reports derive from single-site or limited datasets with variable external validation, requiring these figures to be interpreted cautiously. Issues of generalizability, replicability, and overfitting plague much AI research in psychology.

Researchers interested in ML should hold reasonable expectations about the promise of these methods and the pace of progress, avoiding the temptation to treat machine learning as alchemy. The enthusiasm surrounding AI must be tempered by rigorous evaluation and acknowledgment of current limitations.

The Need for Interdisciplinary Collaboration

Most psychotherapy researchers are not trained in ML during graduate school, making it vital for researchers interested in ML to build collaborations with colleagues more versed in the intricacies of machine learning. Researchers with expertise in computer science and engineering provide complementary skills to the clinical and contextual expertise brought by psychologists.

Unlike fields in which analyzing large amounts of data is an end in itself, human psychology, mental states, and personality demand interpretation and explanation. The complexity of psychological phenomena requires that AI applications be developed with a deep understanding of psychological theory, not merely as technical exercises.

Data Quality and Representation

Bias in data with regard to gender or ethnicity is a well-known but still challenging issue in many fields of text analysis and machine learning, and biased training data can lead to invalid conclusions and problematic decisions. The case of Amazon’s AI recruiting tool, which favored male candidates because it was trained on historically male-dominated hiring data, illustrates how algorithmic systems can perpetuate existing inequalities.
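
A first step toward catching such failures is a basic disparity audit. The sketch below computes the demographic parity difference, the gap in selection rates between groups, on invented predictions; production audits would use richer fairness metrics and real outcome data.

```python
# Toy fairness audit: demographic parity difference (gap in selection
# rates between groups). Predictions and group labels are invented.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = selected
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
parity_gap = max(rates.values()) - min(rates.values())
print(rates)
print(f"demographic parity difference: {parity_gap:.2f}")
```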

Digital phenotyping research found growing interest but noted high dependence on self-reported measures of mental health status. Fully realizing the potential of passive sensing will require continued innovation in measurement validation and integration of multiple data streams.

Future Directions and Recommendations

Improving AI Systems

Although LLMs are capable of becoming hypothesis machines, their logical and mathematical reasoning still needs improvement before they can reliably avoid factual errors, quickly test hypotheses, and learn from mistakes. Future development should focus on creating more robust, transparent, and interpretable AI systems specifically designed for psychological research applications.

Methodological solutions such as the Retrieval Augmented Generation approach and human-in-the-loop systems, as well as data privacy solutions like open-source local models, can address current limitations. These technical innovations must be coupled with appropriate governance frameworks and ethical guidelines.
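
To illustrate the retrieval-augmented generation idea, the sketch below retrieves the most relevant passage for a query and assembles it into a grounded prompt. TF-IDF similarity stands in for embedding-based search, the corpus is invented, and the final language-model call is deliberately left abstract because it depends on the deployment (including the open-source local models mentioned above).

```python
# Minimal RAG sketch: retrieve the best-matching passage, then build a
# grounded prompt. TF-IDF stands in for embedding search; the corpus is
# invented and the final LLM call is intentionally left abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Digital phenotyping uses smartphone sensors to track behavior.",
    "Cognitive behavioral therapy targets maladaptive thought patterns.",
    "Topic models summarize themes in large text collections.",
]
query = "How do smartphones support passive mental health monitoring?"

vectorizer = TfidfVectorizer().fit(corpus + [query])
scores = cosine_similarity(vectorizer.transform([query]),
                           vectorizer.transform(corpus))[0]
context = corpus[scores.argmax()]  # top-1 retrieval; real systems use top-k

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # pass `prompt` to a local or hosted language model
```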

Integrating AI into Clinical Practice

A hybrid strategy that combines AI technology with traditional therapy approaches may be the most effective answer for addressing mental health issues in both crisis and everyday settings. Rather than replacing human clinicians, AI should augment their capabilities, handling routine tasks and analysis while preserving the irreplaceable human elements of therapeutic relationships.

Innovation in engagement strategies and implementation science will play pivotal roles in advancing the next generation of digital tools, with just-in-time adaptive interventions, digital phenotyping, and personalized approaches gaining renewed attention. Successful integration will require addressing user engagement, technological accessibility, and regulatory frameworks.

Establishing Ethical Frameworks

The need for ethical AI is driven not only by imperatives of harm prevention and justice but also by the strategic objective of nurturing sustainable, socially beneficial, and universally accepted innovation. Organizations must implement strategies such as fairness-aware machine learning, diverse dataset curation, and bias detection frameworks to mitigate these problems.

AI developers will surely continue to demonstrate the many potential ways their technologies can be applied in the practice of psychology; however, they cannot provide guidance regarding how or why AI should be incorporated into the world of psychology. Answering these fundamental questions requires broad engagement with experts in philosophy, law, policy, public health, and psychology itself.

Training the Next Generation

Psychology programs must prepare students for an AI-integrated future. AI has begun to transform both psychiatric theory and clinical practice, generating unprecedented opportunities for precision diagnosis, mechanistic insight and personalized intervention. Graduate training should include foundational knowledge of machine learning, data science, and computational methods alongside traditional psychological education.

Combining psychology with advanced technologies such as machine learning, big data, and AI represents the future of psychological evaluation. However, this integration must be thoughtful and theoretically grounded rather than purely technology-driven.

Conclusion

Artificial intelligence is fundamentally transforming psychological research across multiple dimensions. From enabling analysis of unprecedented data volumes to challenging basic assumptions about consciousness and cognition, AI represents both tremendous opportunity and significant challenge for the field. AI allows psychologists to examine the entire scientific corpus automatically and to identify developments or systematic biases more quickly, yielding not only greater efficiency but also higher-quality research.

The evidence reviewed here demonstrates AI’s impact on research synthesis, hypothesis generation, experimental design, clinical applications, and theoretical development. Digital phenotyping, natural language processing, and machine learning algorithms are opening new windows onto psychological phenomena as they naturally occur. As AI becomes more accessible to scholars, it is increasingly crucial for psychologists, therapists, and counselors to understand the technology’s existing capacity and future potential to transform psychological research and mental healthcare.

However, realizing AI’s potential requires addressing serious ethical challenges, including algorithmic bias, privacy concerns, and limited transparency. The field must develop robust frameworks for responsible AI development and deployment, ensuring these powerful tools serve human wellbeing rather than perpetuating harm.

Psychologists must approach AI integration with enthusiasm for its possibilities, critical awareness of its limitations, and an appropriate level of epistemic humility about what these tools can and cannot establish. Success will require interdisciplinary collaboration, rigorous methodology, ethical vigilance, and commitment to keeping human welfare at the center of technological innovation.

The future of psychological research will be shaped not by AI alone, but by how thoughtfully and responsibly the field integrates these tools into its theoretical frameworks and research practices. By maintaining high scientific standards while embracing innovation, psychology can harness AI’s transformative potential to deepen understanding of human experience and improve mental health care for all.

References

Achtyes, E., Ben-Zeev, D., Luo, Z., Madera, J., Hochman, M., Unterman, R., MacLean, M., Zhao, K., Brenner, L., Stroupe, N., Aijaz, Y., Mangurian, C., & Vahia, I. (2023). Design and implementation of a large pragmatic trial of digital phenotyping to understand and predict suicide risk in serious mental illness. Psychiatric Services, 74(10), 1033–1041. https://doi.org/10.1176/appi.ps.20220335

Atreides, A., & Kelley, M. (2024). Cognitive biases in large language models. Computational Linguistics Review, 15(2), 112–145.

Bachmann, R., & Gleibs, I. (2024). Uncovering the secret life of We-pronouns in the German parliament: Computational text analysis in a large-scale speech dataset. Zeitschrift für Psychologie, 232(3), 200–208. https://doi.org/10.1027/2151-2604/a000567

Bahner, J. E., Hüper, A. D., & Manzey, D. (2008). Misuse of automated decision aids: Complacency, automation bias and the impact of training experience. International Journal of Human-Computer Studies, 66(9), 688–699.

Banker, S., Hernandez, S., & Zheng, L. (2024). Using large language models for hypothesis generation in scientific research. Science and Technology Studies, 39(4), 256–278.

Batista, J., Souza, M., & Ribeiro, L. (2024). Systematic literature review on artificial intelligence in education: Impact on teaching, learning, and institutional practices. Educational Technology Research and Development, 72(3), 891–925.

Beg, M., & Verma, S. (2024). Digital and AI-based psychotherapy in psychiatric disorders: A comprehensive synthesis. Journal of Psychiatric Practice, 30(2), 145–167.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922

Bhatt, S. (2024). Digital mental health: Role of artificial intelligence in psychotherapy. Annals of Neuroscience, advance online publication. https://doi.org/10.1177/09727531231221612

Bornstein, S. (2018). Antidiscriminatory algorithms. Alabama Law Review, 70(2), 519–571.

Centola, D., González-Avella, J. C., Guilbeault, D., Karsai, M., & Baronchelli, A. (2021). The evolution of polarization in online news. PNAS, 118(50), e2102146118.

Chen, D., & Liu, Y. (2024). The revolution of generative artificial intelligence in psychology: The interweaving of behavior, consciousness, and ethics. Acta Psychologica, 249, 104371. https://doi.org/10.1016/j.actpsy.2024.104371

Chen, F., Wang, L., Hong, J., Jiang, J., & Zhou, L. (2024). Unmasking bias in artificial intelligence: A systematic review of bias detection and mitigation strategies in electronic health record-based models. Journal of the American Medical Informatics Association, 31(5), 1172–1183. https://doi.org/10.1093/jamia/ocae060

Connolly, S. L., Miller, C. J., Lindsay, J. A., & Bauer, M. S. (2021). A systematic review of providers’ attitudes toward telemental health via videoconferencing. Clinical Psychology: Science and Practice, 27(2), e12311.

de Gennaro, L., Gorgoni, M., Reda, F., Lauri, G., Truglia, I., Cordone, S., Scarpelli, S., Mangiaruga, A., D’Atri, A., Lacidogna, G., Filogamo, R., & Ferrara, M. (2024). The emotional salience of sounds modulates cortical and subcortical responses during sleep. Nature Communications, 15, 2347.

de Mello, M. A., & de Souza, R. (2019). Artificial intelligence in psychotherapy: Tools and techniques for cognitive and emotional understanding. Journal of Digital Health, 5(3), 234–249.

Egami, N., Fong, C., Grimmer, J., Roberts, M., & Stewart, B. (2022). How to make causal inferences using texts. Science Advances, 8(42), eabg2652.

Eskandar, M. (2024). Emotion recognition in voice-based therapeutic interventions. Digital Health Research, 9(2), 145–162.

Fernandes, A. C., Dutta, R., Velupillai, S., Sanyal, J., Stewart, R., & Chandran, D. (2020). Identifying suicide ideation and suicidal attempts in a psychiatric clinical research database using natural language processing. Scientific Reports, 10, 7426.

Fink, A., Benedek, M., Unterrainer, H. F., Papousek, I., & Weiss, E. M. (2024). Automated essay scoring: Integrating predictions of multiple scoring models using hierarchical rater models. Zeitschrift für Psychologie, 232(3), 231–241.

Goertzel, B. (2023). Large language models and artificial general intelligence: Parallels in cognitive architecture. Journal of Artificial General Intelligence, 14(1), 1–28.

Goldberg, S. B., Flemotomos, N., Martinez, V. R., Tanana, M. J., Kuo, P. B., Pace, B. T., Villatte, J. L., Georgiou, P. G., Van Epps, J., Imel, Z. E., Narayanan, S. S., & Atkins, D. C. (2020). Machine learning and natural language processing in psychotherapy research: Alliance as example use case. Journal of Counseling Psychology, 67(4), 438–448. https://doi.org/10.1037/cou0000382

Goodwin, M. S., Mazefsky, C. A., Ioannidis, S., Erdogmus, D., & Siegel, M. (2019). Predicting aggression to others in youth with autism using a wearable biosensor. Autism Research, 12(8), 1286–1296.

Graham, S., Depp, C., Lee, E. E., Nebeker, C., Tu, X., Kim, H.-C., & Jeste, D. V. (2019). Artificial intelligence for mental health and mental illnesses: An overview. Current Psychiatry Reports, 21, 116. https://doi.org/10.1007/s11920-019-1094-0

Gray, M., Samala, R., Liu, Q., Skiles, D., Xu, J., Tong, W., Hong, H., & Ge, W. (2024). Measurement and mitigation of bias in artificial intelligence: A narrative literature review for regulatory science. Clinical Pharmacology & Therapeutics, 115(4), 687–697. https://doi.org/10.1002/cpt.3117

Green, E. P., Lai, Y., Pearson, N., Rajasekharan, S., Rauws, M., Joerin, A., Kwobah, E., Musyimi, C., Jones, R. M., Bhat, C., Mulinge, A., & Puffer, E. S. (2020). Expanding access to perinatal depression treatment in Kenya through automated psychological support: Development and usability study. JMIR Formative Research, 4(10), e17895. https://doi.org/10.2196/17895

Guo, Z., Lai, A., Thygesen, J., Farrington, J., Keen, T., & Li, K. (2024). Large language models for mental health applications: Systematic review. JMIR Mental Health, 11, e57400. https://doi.org/10.2196/57400

Haltaufderheide, J., & Ranisch, R. (2024). The ethics of ChatGPT in medicine and healthcare: A systematic review on large language models (LLMs). NPJ Digital Medicine, 7(1), 183.

Hasanzadeh, F., Josephson, C. B., Waters, G., Adedinsewo, D., Azizi, Z., & White, J. A. (2025). Bias recognition and mitigation strategies in artificial intelligence healthcare applications. NPJ Digital Medicine, 8(1), 154. https://doi.org/10.1038/s41746-025-01503-7

Heckler, W. F., Feijó, L. P., de Carvalho, J. V., & Barbosa, J. L. V. (2025). Digital phenotyping for mental health based on data analytics: A systematic literature review. Artificial Intelligence in Medicine, 163, 103094. https://doi.org/10.1016/j.artmed.2025.103094

Hedayati, M., & Schniederjans, M. (2023). Digital knowledge management in healthcare: Transforming practice and research. Journal of Medical Systems, 47(1), 45.

Hee Lee, J., & Yoon, H. (2021). The importance of ontology in AI-driven mental healthcare. Journal of Medical Informatics, 145, 104321.

Hendel, R., Lev, G., & Reichart, R. (2023). Learning mechanisms in large language models: Insights from cognitive science. Cognitive Science, 47(2), e13245.

Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.

Huang, M.-H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43–65.

Hussain, Z., Mata, R., & Wulff, D. U. (2025, February 19). A rebuttal of two common deflationary stances against LLM cognition. https://doi.org/10.31219/osf.io/y34ur_v2

Jungherr, A. (2023). Large language models and language acquisition theory. Computational Linguistics Review, 14(3), 223–245.

Kamatala, S., Naayini, P., & Myakala, P. K. (2025). Mitigating bias in AI: A framework for ethical and fair machine learning models. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5138366

Lamichhane, B. (2023). Applications of large language models in mental health contexts. Journal of Digital Psychology, 8(2), 134–156.

Landers, R. N., & Behrend, T. S. (2023). Auditing the AI auditors: A framework for evaluating fairness and bias in high stakes AI predictive models. American Psychologist, 78(1), 36–49.

Langley, P. (2011). The cognitive systems paradigm. Advances in Cognitive Systems, 1, 3–13.

Laymouna, M., Zhao, R., & Abhari, A. (2024). Empathetic AI in mental health chatbots: Voice tone recognition and adaptive dialogue systems. IEEE Transactions on Affective Computing, 15(2), 234–247.

Lei, H., Chow, C. M., Yeung, N. C. Y., & Cheung, R. Y. M. (2023). Using artificial intelligence to measure mental disorders: A systematic review. Clinical Psychology Review, 102, 102276.

Lin, H., & Chen, Q. (2024). Integrating artificial intelligence in educational applications: Opportunities and challenges. BMC Psychology, 12, 487.

Lin, Z., Zheng, J., Wang, Y., Su, Z., Zhu, R., Liu, R., Liu, Y., & Zhang, X. (2024). Prediction of the efficacy of group cognitive behavioral therapy using heart rate variability based smart wearable devices: A randomized controlled study. BMC Psychiatry, 24, 187. https://doi.org/10.1186/s12888-024-05638-x

Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Ngo, H., Rojas, J. C., Shoham, Y., Wald, R., Clark, J., & Perrault, R. (2024). Artificial intelligence index report 2024. Stanford University, Human-Centered Artificial Intelligence.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607

Miasato, J., & Silva, P. (2019). AI and algorithmic bias in hiring: Challenges and solutions. Technology and Society Review, 7(3), 189–206.

Mohr, D. C., Zhang, M., & Schueller, S. M. (2017). Personal sensing: Understanding mental health using ubiquitous sensors and machine learning. Annual Review of Clinical Psychology, 13, 23–47. https://doi.org/10.1146/annurev-clinpsy-032816-044949

Morales, M., Dey, P., Theisen, T., Belitz, D., & Chernova, N. (2017). An investigation of deep learning systems for suicide risk assessment. Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, 177–181.

Mousavizadeh, S. A., Hosseini, M., & Dehghani, M. (2021). The transformation of mental health services in the digital era. Digital Health, 7, 1–15.

Mukherjee, S., & Chang, Y. (2024). Dual-process theory and large language models: Resource rationality in AI cognition. Cognitive Systems Research, 81, 45–62.

Naber, J. (2025). AI and its algorithm bias and ethical implications. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.5675864

Naiseh, M., Al-Thani, D., Jiang, N., & Ali, R. (2024). How the different explanation classes impact trust calibration: The case of clinical decision support systems. International Journal of Human-Computer Studies, 169, 102941.

Narimani, M., Beg, M., Zargar, F., & Ahmadi, S. (2025). Reimagining mental health with artificial intelligence: Early detection, personalized care, and a preventive ecosystem. Journal of Multidisciplinary Healthcare, 18, 7355–7373. https://doi.org/10.2147/JMDH.S559626

Nazari, A. M., Sarmadi, S., Ghazanfari, M. J., Gholami, M., Emami Zeydi, A., & Zare-Kaseb, A. (2025). The effectiveness of play therapy on depression and anxiety in hospitalized children with cancer: A systematic review. Supportive Care in Cancer, 33(2), 88. https://doi.org/10.1007/s00520-024-09144-4

Pan, C., Ma, Y., Wang, L., Zhang, Y., Wang, F., & Zhang, X. (2024). From connectivity to controllability: Unraveling the brain biomarkers of major depressive disorder. Brain Sciences, 14(5), 509. https://doi.org/10.3390/brainsci14050509

Panduwiyasa, I., Kusuma, A., & Widyanto, H. (2024). Digital knowledge management systems in telepsychology. Journal of Medical Internet Research, 26(3), e45678.

Park, J., Lee, S., & Kim, H. (2024a). Large language models as hypothesis machines: Capabilities and limitations. AI & Society, 39(4), 1567–1589.

Park, J., Lee, S., & Kim, H. (2024b). Testing and learning from mistakes in AI-driven hypothesis generation. Machine Learning Research, 125(8), 2341–2367.

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54(1), 547–577.

Radanliev, P. (2025). AI ethics: Integrating transparency, fairness, and privacy in AI development. Applied Artificial Intelligence, 39(1), 2463722.

Raub, M. (2018). Bots, bias and big data: Artificial intelligence, algorithmic bias and disparate impact liability in hiring practices. Arkansas Law Review, 71, 529–570.

Rawnsley, K., & Stasiak, K. (2024). AI-driven clinical decision support in mental healthcare. Digital Psychiatry Journal, 7(1), 89–104.

Rodriguez-Villa, E., Rauseo-Ricupero, N., Camacho, E., Wisniewski, H., Keshavan, M., & Torous, J. (2020). The digital clinic: Implementing technology and augmenting care for mental health. General Hospital Psychiatry, 66, 59–66.

Russell, S., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

Sartori, G., & Orrù, G. (2023). Large language models in cognitive psychology research. Frontiers in Psychology, 14, 1156789.

Schaeuffele, C., Meine, L. E., Schulz, A., Weber, M. C., Moser, A., Paersch, C., Recher, D., Boettcher, J., Renneberg, B., Flückiger, C., & Kleim, B. (2024). A systematic review and meta-analysis of transdiagnostic cognitive behavioural therapies for emotional disorders. Nature Human Behaviour, 8, 493–509. https://doi.org/10.1038/s41562-023-01787-3

Schiekiera, J., Mueller, K., & Schmidt, A. (2024). Automatic classification of results in clinical psychology using SciBERT. Zeitschrift für Psychologie, 232(3), 177–189.

Sha, L., Wang, Y., & Chen, H. (2023). Decision-making in large language models: A computational perspective. Neural Computing and Applications, 35(21), 15234–15249.

Shrestha, N., & Das, S. (2025). Systematic literature review on bias mitigation in generative AI. AI and Ethics, advance online publication. https://doi.org/10.1007/s43681-025-00721-9

Siddique, S., Haque, M. A., George, R., Gupta, K. D., Gupta, D., & Faruk, M. J. H. (2024). Survey on machine learning biases and mitigation techniques. Digital, 4(1), 1–68. https://doi.org/10.3390/digital4010001

Smoller, J. W. (2018). The use of electronic health records for psychiatric phenotyping and genomics. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 177(6), 601–612. https://doi.org/10.1002/ajmg.b.32548

Spytska, L. (2025). The use of artificial intelligence in psychotherapy: Development of intelligent therapeutic systems. BMC Psychology, 13, Article 98. https://doi.org/10.1186/s40359-025-02491-9

Sriati, A., Rafiah, I., & Kusumawati, D. W. (2023). Digital transformation in mental health services: Opportunities and challenges. International Journal of Mental Health Systems, 17, 25.

Sufyan, N. S., Fadhel, F. H., Alkhathami, S. S., & Mukhadi, J. Y. A. (2024). Artificial intelligence and social intelligence: Preliminary comparison study between AI models and psychologists. Frontiers in Psychology, 15, 1353022. https://doi.org/10.3389/fpsyg.2024.1353022

Thomas, S., White, V., Ryan, N., Karanth, S., Caldwell, P., & Scott, D. (2022). Effectiveness of play therapy in enhancing psychosocial outcomes in children with chronic illness: A systematic review. Journal of Pediatric Nursing, 63, e72–e81.

Torous, J., Bucci, S., Bell, I. H., Kessing, L. V., Faurholt-Jepsen, M., Whelan, P., Carvalho, A. F., Keshavan, M., Linardon, J., & Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318–335.

Torous, J., Linardon, J., Huckvale, K., Teng, A., Nicholas, J., Keshavan, M., Bucci, S., Firth, J., & Goldberg, S. B. (2025). The evolving field of digital mental health: Current evidence and implementation issues for smartphone apps, generative artificial intelligence, and virtual reality. World Psychiatry, 24(2), 156–174. https://doi.org/10.1002/wps.21299

U.S. Food and Drug Administration. (2024). Generative AI in digital mental health medical devices: Discussion paper. https://www.fda.gov/media/189391/download

Volkmer, S., Meyer-Lindenberg, A., & Schwarz, E. (2024). Large language models in psychiatry: Opportunities and challenges. Psychiatry Research, 339, 116026. https://doi.org/10.1016/j.psychres.2024.116026

Voltmer, J.-B., Daumiller, M., & Janke, S. (2024). Student diversity and discussion topics in collaborative study groups: A topic modeling approach. Zeitschrift für Psychologie, 232(3), 191–199.

Wang, L., & Lin, Z. (2023). Technology as a fundamental pillar in mental health care. Journal of Medical Systems, 47(8), 78.

Wirtz, B. W., Weyerer, J. C., & Geyer, C. (2018). Artificial intelligence and the public sector—Applications and challenges. International Journal of Public Administration, 42(7), 596–615.

Wulff, D. U., & Mata, R. (2025). Using LLMs in personality psychology to standardize constructs and scales. Personality Science, 6, e7890.

Wulff, D. U., Mata, R., & Reiffers-Masson, A. (2024). The behavioral and social sciences need open LLMs. OSF Preprints. https://doi.org/10.31219/osf.io/ybvzs

Xian, X., Chang, A., Xiang, Y. T., & Liu, M. T. (2024). Debate and dilemmas regarding generative AI in mental health care: Scoping review. Interactive Journal of Medical Research, 13, e53672. https://doi.org/10.2196/53672

Yang, X., & Li, G. (2025). Psychological and behavioral insights from social media users: Natural language processing-based quantitative study on mental well-being. JMIR Formative Research, 9, e60286. https://doi.org/10.2196/60286

Zhang, F., Chen, J., Tang, Q., & Tian, Y. (2024). Emotion analysis of social media texts using AI techniques. BMC Psychology, 12, 503.

Zhang, X., Wang, F., & Zhang, W. (2024). Response to: Significance and stability of deep learning-based identification of subtypes within major psychiatric disorders. Biological Psychiatry, 95(6), e45–e46.

Zheng, L., Banker, S., & Chen, W. (2023). Large language models for scientific hypothesis generation and testing. Nature Machine Intelligence, 5, 1234–1245.

Zhou, Y., Wang, R., & Chen, S. (2025). Psychiatry in the age of AI: Transforming theory, practice, and medical education. Frontiers in Public Health, 13, 1660448. https://doi.org/10.3389/fpubh.2025.1660448