
You are invited to take part in a research study conducted by Jie Chen, a graduate student under the direction of Professor Hong Guo and Professor Xiqing Sha in the W. P. Carey School of Business, Department of Information Systems, at Arizona State University.
The purpose of this study is to examine how varying levels of transparency in generative AI (GenAI) systems influence users' trust, engagement, and perceived risk when interacting with an AI assistant on a career-relevant task. Because summer internship recruiting is currently active, this task is designed to be directly useful to you as a participant.
If you agree to participate, you will complete three stages. The total time is approximately 30 minutes.
⚠️ Important: Please do not include any personally identifiable information (such as your name, student ID, address, or other identifying details) in any responses or content entered into the AI assistant. For the duration of this study, you should not use any AI tools other than the one provided as part of the experiment.
You will receive a baseline of 2 bonus points added to your final course grade upon completion of the study. An additional 1 bonus point will be awarded if your report ranks among the top 20% of submissions as evaluated by expert graders. The maximum bonus is 3 points.
Your participation is entirely voluntary. You may skip any question or stop participating at any time without penalty. Please note that data you enter into the generative AI career assistant cannot be withdrawn once submitted, as it becomes part of the AI interaction logs that cannot be separated from system-level data. You must be 18 years or older to participate.
The results of this study will only be reported in aggregate form (group summaries). Individual responses will not be identified in any reports, presentations, or publications. De-identified data collected as part of this study may be included in replication files that are sometimes required by academic conferences or journals. These files will not contain any names, IDs, or other identifying information, and will only be used for scientific replication and verification purposes. Please do not enter any personally identifiable information into the generative AI career assistant.
The risks of participation are minimal. There is a small privacy risk associated with online data collection.
There are no major direct benefits to you. You may gain experience exploring career information with an AI tool, and your participation will help researchers design more transparent and trustworthy AI systems.
Your information will be kept confidential. Data will be de-identified and stored securely on ASU servers. Identifiable information (such as emails) will only be used for account access and will not be linked to your responses. Audio recordings, if collected, will be stored securely and de-identified before analysis. Results will only be reported in aggregate form in publications, presentations, or reports. Please note that data entered into the generative AI career assistant may be used to help train or improve the AI tool. All such data will be de-identified before analysis and stored securely.
If you have questions about this study, contact Jie Chen at jchen596@asu.edu, Professor Hong Guo at hguo@asu.edu, or Professor Xiqing Sha at Xiqing.Sha@asu.edu.
If you have questions about your rights as a research participant, or if you feel you have been placed at risk, you may contact the Chair of the Human Subjects Institutional Review Board at Arizona State University, Office of Research Integrity and Assurance, at (480) 965-6788. The study number is STUDY00022982.
By clicking "I agree", you confirm that you have read this consent form and agree to participate in this study.