Sabid Bin Habib Pias

User Experience Research | Human-Centered AI
Hi, I'm a UX Researcher and Ph.D. candidate exploring the intersection of human behavior and Artificial Intelligence. I am pursuing my Ph.D. in Computer Science and Human-Computer Interaction at the Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington. Additionally, I serve as a Research Assistant, advised by Professor Apu Kapadia.
I specialize in understanding how conversational AI shapes the way people make decisions in their daily digital interactions. My research combines the rigor of academic inquiry with real-world application. I use mixed-methods approaches including experimental design, behavioral analysis, and usability testing to uncover insights that make technology more human-centered and trustworthy. My expertise includes designing and conducting user research, statistical and qualitative analysis, and applied machine learning. During my 2023 internship at Idaho National Laboratory, I designed and evaluated an explainable AI interface that helps users understand complex algorithmic decisions. Before starting my Ph.D., I spent 2.5 years leading Android development and user research at a startup, where I learned firsthand how user insights drive product success.
I'm passionate about bridging the gap between cutting-edge AI capabilities and ethical, user-centered design. Through my work, I aim to ensure that as our digital tools become smarter, they also become more trustworthy and beneficial for everyone who uses them.

Research Values

Justification: I ensure that every research question addresses a genuine need with appropriate methodology. Research without strong motivation creates minimal impact and wastes stakeholder resources.
Integrity: I maintain rigorous standards in methodology and data presentation. Without integrity, research lacks credibility and cannot achieve sustainable impact.
Impact: Finally, I focus on research that creates meaningful change or establishes foundations for future progress.

My research values are interconnected. Justification ensures relevance, integrity ensures validity, and impact ensures value. I believe that these values, together, drive research that matters.

I am actively seeking full-time opportunities as a User Experience Researcher and Research Scientist. Please feel free to reach out at sabidbinhabib <at> gmail <dot> com if your team is hiring.



News and Updates

  • May 2025: Served as an Associate Chair (AC) at ACM CSCW 2025 Posters
  • April 2025: Full-track paper accepted at the ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2025
  • April 2025: Workshop paper accepted at the ACM CHI 2025 Workshop on Sociotechnical AI Governance: Opportunities and Challenges for HCI

  • November 2024: Chaired the "Conversations with Machines" paper session at ACM CSCW 2024.
  • November 2024: Served as a student volunteer at ACM CSCW 2024.
  • November 2024: Received special recognition for outstanding reviews of full-track papers at ACM CHI 2025
  • July 2024: Received the Best Paper Award for the paper "The Impact of Perceived Tone, Age, and Gender on Voice Assistant Persuasiveness"
  • July 2024: Presented a full-track paper at ACM Conversational User Interfaces (CUI) 2024
  • July 2024: Served as a Program Committee (PC) member at EuroUSEC 2024
  • June 2024: Defended my dissertation proposal
  • May 2024: Received the Luddy Research Excellence Award for the academic year 2023-24
  • May 2024: Presented my Explainable AI Research at the ACM CHI 2024 Workshop on Human-Centered Explainable AI (HCXAI)
  • May 2024: Paper accepted at ACM Conversational User Interfaces (CUI) 2024
  • May 2024: Served as a student volunteer at ACM CHI 2024
  • April 2024: Workshop paper accepted at the ACM CHI 2024 Workshop on Human-Centered Explainable AI (HCXAI)
  • April 2024: Workshop paper accepted at the ACM CHI 2024 Workshop on Trust and Reliance in Evolving Human-AI Workflows (TREW)

  • November 2023: Received special recognition for outstanding reviews of full-track papers at ACM CHI 2024
  • October 2023: Served as a student volunteer at ACM CSCW 2023
  • September 2023: Received Cognizant Trust and Safety Scholarship for research in trustworthy AI
  • August 2023: Presented at Idaho National Laboratory intern poster session
  • June 2023: Started summer internship at Idaho National Laboratory as an XAI research intern
  • April 2023: Received special recognition for outstanding reviews of full-track papers at ACM CUI 2023

  • November 2022: Presented full-track paper "Decaying Photos for Enhanced Privacy" at ACM CSCW 2022
  • May 2022: Served as a student volunteer at ACM CHI 2022
  • April 2022: Passed the Ph.D. qualifying exam
  • April 2022: Paper accepted at ACM CSCW 2022

Expertise

  • Programming Languages and Frameworks: Python, R, JavaScript, Flask, React, LIME, SHAP
  • Deep Learning Tools: PyTorch, TensorFlow, LangChain, Scikit-Learn
  • Research Methods: Interviews, Focus Group Studies, Online Surveys, Mixed Methods, Card Sorting, Tree Testing, Usability Testing, A/B Testing
  • Machine Learning and Large Language Models: Regression, CNN, RNN, Random Forest, LLM Fine-tuning, LoRA, RAG
  • Statistical Analyses: Correlation, t-tests, Regression, Mixed-Effects Models, Pairwise Comparisons, Non-parametric Tests, Factor Analysis, Clustering, Mediation Analysis
  • Other Tools: Git, SQL, REST API, GPT API, Amazon Alexa Skills, Tool Calling, Microsoft Speech Studio, Qualtrics, Overleaf, Zotero

Full-track Papers


Human-centered AI


Towards the Persuasiveness of Conversational Assistants' Vocal Tone

ACM Conversational User Interfaces (CUI) 2024 | Best Paper Award | Lead Author

Suppose you are rushing to complete morning chores while asking your digital assistant to read your day's schedule. Would you register the important deadline when it responds in a flat, mechanical voice listing your appointments? Or would you be more attentive when the same assistant delivers your schedule in a warm, conversational tone that emphasizes priorities? This striking difference in response to identical information raises a compelling research question: how do voice assistants' tonal qualities and speaking patterns influence not just your satisfaction with the technology, but your actual decision-making processes and willingness to follow their guidance?
This research explores whether voice assistants, like Alexa or Siri, can have a similar impact. Specifically, we investigated how the tone, age, and gender of a voice assistant influence whether people follow its product recommendations. Our experiment demonstrated that participants preferred positive and neutral tones from the agents and recommended that tone be customized based on the task and context. We also found stereotypical preferences for younger female and older male voices. This research provides insights into how these vocal qualities shape user decisions, which can help designers create voice technologies that are not only more effective but also more ethical.
Importantly, this study was conducted before large language models (LLMs) like ChatGPT introduced advanced voice features to the public. At that time, commercial voice assistants relied on fixed, pre-designed voices, making our findings especially relevant for understanding how specific vocal qualities shape user trust and engagement. Now, as LLM-powered voice assistants enable more natural, continuous, and human-like conversations, this research provides foundational insights for designing these new systems. By identifying which voice characteristics foster persuasiveness and comfort, our work guides the creation of user-centered, ethical voice interactions. This is crucial for the next generation of human-LLM interaction in the voice modality, where the ability to adapt voice features can be utilized to enhance user experience and trust.

Methods: Interactive User Study, A/B Testing, Statistical Analysis, Qualitative Analysis, Voice Synthesis
Tools: Python, R, Javascript, Microsoft Speech Studio, Qualtrics
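
For a flavor of how such tone conditions can be produced, below is a minimal synthesis sketch using the Azure Speech SDK, the programmatic counterpart of Microsoft Speech Studio. It is an illustration only: the voice name, style labels, and credentials are placeholders rather than the study's actual configuration, and the styles available vary by voice.

    # Hypothetical sketch, not the study's configuration: rendering one
    # utterance in different vocal tones via SSML with the Azure Speech SDK.
    import azure.cognitiveservices.speech as speechsdk

    SSML = """
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
      <voice name="en-US-JennyNeural">
        <mstts:express-as style="{style}">{text}</mstts:express-as>
      </voice>
    </speak>
    """

    def synthesize(text, style, out_path):
        """Render one tone condition to a WAV file. A neutral baseline can be
        produced by omitting the express-as element entirely."""
        config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
        audio = speechsdk.audio.AudioOutputConfig(filename=out_path)
        synthesizer = speechsdk.SpeechSynthesizer(speech_config=config, audio_config=audio)
        synthesizer.speak_ssml_async(SSML.format(style=style, text=text)).get()

    # Example: the same schedule reminder in two tone conditions.
    for style in ("cheerful", "sad"):
        synthesize("Your next meeting starts at ten.", style, f"tone_{style}.wav")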

Not All Errors Are Equal: How Timing and Severity of AI Errors Impact User Trust

ACM Conference on Fairness, Accountability, and Transparency 2025 (FAccT) | Second Author

Have you ever relied on AI to automate repetitive yet critical tasks, such as screening images to ensure they're safe for children? Imagine your trusted AI mistakenly labels an inappropriate image as safe. Would your trust remain the same, or would doubts start to creep in?
Inspired by these motivations, we investigated how the timing and severity of AI errors shape user trust. We discovered that a severe mistake late in an AI's process, especially in high-stakes areas like defense or social media moderation, can quickly erode trust, even if the AI performed flawlessly before. Smaller or early errors are more easily forgiven if the system proves reliable afterward. This pattern matters because trust is essential for people to accept, rely on, and benefit from AI systems. If trust is lost, users may misuse, over-rely on, or reject helpful technology.
As AI becomes more common in critical decision-making, understanding and regulating these trust patterns is crucial for safety, transparency, and ethical use. Our findings provide practical guidance for building AI systems that align with human expectations, promote responsible adoption, and support regulatory oversight, ensuring trust becomes a strength, not a vulnerability, in human-AI collaboration.

Methods: Online User Study, Statistical Analysis, Qualitative Analysis, A/B Testing
Tools: Python, R, Javascript, Qualtrics
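
To give a sense of the statistical approach, here is a minimal analysis sketch using a mixed-effects model. The dataset layout and column names (error_timing, error_severity, participant_id) are my own assumptions for illustration, not the paper's analysis code.

    # Illustrative only: trust ratings modeled as a function of error timing
    # and severity, with a random intercept per participant. Assumes one row
    # per participant x trial in a hypothetical long-format CSV.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trust_ratings.csv")  # hypothetical file

    model = smf.mixedlm(
        "trust ~ C(error_timing) * C(error_severity)",
        data=df,
        groups=df["participant_id"],
    )
    print(model.fit().summary())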

LLM-Generated Privacy Nudges in Social Media Decision-Making

Under Review | Lead Author

Imagine scrolling through social media and finding a funny, embarrassing photo that reminds you of your friends. You want to share it, but how would you feel if you were the person in the photo? Our research explores whether large language models (LLMs) can provide timely, gentle reminders before sharing, helping users make more considerate decisions without being intrusive. We found that brief, clear prompts and personalized guidance are most effective, empowering users to share thoughtfully and protect privacy.
Methods: Mixed-Methods Study, User Study Design, Statistical Analyses, Qualitative Analyses
Tools: OpenAI API, Python, R, Qualtrics
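
The sketch below illustrates the general idea of generating such a nudge with the OpenAI API. The model name and prompt wording are assumptions for illustration, not the study's actual materials.

    # Illustrative sketch (not the study's prompts): ask an LLM for a brief,
    # non-intrusive privacy reminder before a photo is shared.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def privacy_nudge(photo_description):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You write one-sentence, gentle privacy reminders. "
                            "Never forbid sharing; just prompt reflection."},
                {"role": "user",
                 "content": f"The user is about to share this photo: {photo_description}"},
            ],
        )
        return response.choices[0].message.content

    print(privacy_nudge("a group photo where one friend looks visibly embarrassed"))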

Usable Security and Privacy


Decaying Photos for Enhanced Privacy

ACM Conference on Computer-Supported Cooperative Work and Social Computing 2022 (CSCW) | Lead Author

Suppose you’re at a family gathering, and everyone wants to take group photos to remember the day. Some people are happy to share every picture right away, while others worry about a messy background, a bystander caught in the shot, or their house number showing. You might wonder: Is there a way to share these memories while also respecting everyone’s privacy and keeping sensitive details safe, even after the photos are shared?
This is the kind of challenge today’s social media users face: balancing the joy of sharing with the need to protect personal and bystander privacy. Traditional solutions like instantly covering up sensitive parts of a photo can make the image less enjoyable, while apps that delete photos after a short time (like Snapchat) do not let people preserve memories.
We explore a new approach called "temporal redactions," where sensitive parts of a photo are gradually or later hidden after sharing, letting people enjoy the full image at first while protecting privacy over time. By studying how people feel about these techniques across different audiences (like family, friends, or the public), we found that many users prefer having more flexible and context-aware privacy options, especially for photos with personal details or bystanders.
This research provides insights to design better privacy tools for social platforms. It shows that privacy isn't one-size-fits-all: people want control over what's visible, for how long, and to whom. By understanding these preferences, platforms can offer smarter, more user-friendly ways to share memories without sacrificing privacy, empowering users to share confidently in a digital world.

Methods: Quantitative Study, User Interface Design
Tools: R, Python, Javascript, Qualtrics
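
As a toy illustration of the concept (not the study's implementation), the snippet below uses Pillow to blur a sensitive region more heavily the longer a photo has been shared; the region coordinates and decay window are arbitrary.

    # Toy sketch of "temporal redaction": the blur over a sensitive region
    # grows with time since sharing until the region is fully obscured.
    from PIL import Image, ImageFilter

    def decay_region(photo, box, hours_since_share, full_decay_hours=72.0):
        """Blur `box` (left, upper, right, lower) in proportion to elapsed time."""
        strength = min(hours_since_share / full_decay_hours, 1.0)
        if strength == 0:
            return photo
        region = photo.crop(box).filter(ImageFilter.GaussianBlur(radius=25 * strength))
        out = photo.copy()
        out.paste(region, box)
        return out

    img = Image.open("gathering.jpg")  # placeholder image
    decay_region(img, (400, 120, 560, 200), hours_since_share=36).save("decayed.jpg")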

Workshop Papers and Extended Abstracts


Personality and Explainability: How Detailed Explanations Impact Agreement with XAI

ACM CHI 2024 Workshop on Human-Centered Explainable AI (HCXAI) | Lead Author

Suppose you’re attending a cooking class, and the chef is teaching everyone how to prepare a complex dish. Some students want to know every detail: why each ingredient is added, the science behind each step, and how to fix mistakes along the way. Others just want clear, simple instructions so they don’t get overwhelmed. Why would some people crave detailed explanations while others prefer to keep things straightforward?
Motivated by this phenomenon, we investigated people's preferences for and agreement with AI explanations. We found that people who are more anxious or less comfortable with technology actually agree with AI interfaces more when they give short, simple answers instead of long explanations. Meanwhile, people who are more organized or comfortable with tech might appreciate more detail. The key takeaway is that not everyone wants or benefits from the same type of explanation, so for AI systems to be truly helpful and trustworthy, they should adjust how much they explain based on each person’s needs and comfort level.
This research provides Explainable AI design insights by showing that one-size-fits-all explanations may not serve everyone equally well, and that considering user personality and technological comfort is crucial for designing trustworthy, accessible, and user-friendly AI systems.

Methods: Quantitative Analysis, Interactive User Study Design, A/B Testing
Tools: Python, R, Qualtrics

Effect of Mistake Timing and Severity in the Trust Dynamics in Human-AI Collaboration

ACM CHI 2024 Workshop on Trust and Reliance in Evolving Human-AI Workflows (TREW) | Second Author

Think of a restaurant where a chef is preparing a multi-course meal. If they burn the appetizer but serve perfect dishes for the rest of the meal, what would you think of the chef? If they make a minor mistake with the dessert after serving flawless courses, would you still trust them to cook for you again?
This abstract proposes an experiment to evaluate analogous dynamics in human-AI interactions: how the timing and severity of AI errors impact trust in high-stakes systems (e.g., military, social media moderation). It also discusses ethical considerations and the potential impact of the experiment's results.

Methods: User-centric Study Design
Tools: Python

Effects of Vocal Tone on the Trustworthiness of Voice Assistants

Under Review | Lead Author

Imagine you’re shopping online and need advice. You can ask two salespeople: one speaks in a warm, friendly tone, while the other sounds flat or even a bit negative. Which one would you trust more with your purchase decision, and why does their tone make such a difference?
This poster investigated a similar effect in voice assistant interactions: how the vocal tone of voice assistants (VAs) influences how attractive and trustworthy they seem during complex tasks like online shopping. The study found that VAs using positive or neutral tones were rated as more appealing and trustworthy than those with negative tones. The perceived attractiveness of the VA’s voice played a key role in building trust, while factors like the voice’s age or gender had little effect.
This research shows that thoughtful voice design can make VAs more engaging and trustworthy for users, encouraging broader adoption for complex tasks. As voice assistants become more central to daily life, designing voices that elicit genuine trust can improve user experience and help people make better decisions with confidence.

Methods: Mixed-Methods Study, Quantitative Analyses
Tools: Python, R, Javascript

Industry Experience


Idaho National Laboratory

Python Programmer Intern, Summer 2023 | Internship

Technicians at water treatment plants typically monitor numerous sensor outputs across different sections of the facility to anticipate and locate potential faults. This process can be both time-consuming and cognitively demanding. To address this challenge, I developed a machine learning application that predicts water plant faults using Random Forest, Support Vector Regression, and Neural Network models, achieving an accuracy of 98%. To ensure the predictions are interpretable and actionable, I also built an explainable AI interface incorporating LIME and SHAP, enabling technicians to understand the rationale behind each prediction. To evaluate the usability of the XAI interface, I conducted usability testing and surveys with experts and operators, which identified a 23% increase in operator satisfaction with the newly proposed interface for water pump fault prediction.

Methods: Explainable AI (XAI), Machine Learning, Software Development, Data Visualization, Usability Study Design
Tools: Python, PyTorch, LIME, SHAP, PyQt
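
The sketch below condenses the core of that pipeline: a Random Forest classifier whose per-prediction explanations are surfaced with SHAP. File and feature names are placeholders, and the deployed interface additionally used LIME and other model types.

    # Condensed sketch of the fault-prediction pipeline: train a Random Forest
    # and surface per-prediction explanations with SHAP. Names are placeholders.
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    sensors = pd.read_csv("pump_sensors.csv")  # hypothetical sensor log
    X, y = sensors.drop(columns=["fault"]), sensors["fault"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)

    # SHAP values show a technician which sensor readings drove each prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)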

Field Information Solutions GmbH

Software Engineer and User Experience Researcher, January 2017 - May 2019 | Full Time

In many low-resource settings, representatives from distribution and agricultural procurement companies frequently visit local shops or rural farms to collect data on product demand and harvest volumes. These insights are critical for ensuring timely product delivery to retail outlets and efficient procurement from producers.
To support this process, I conducted user experience research and developed a user-centered Android application designed for field representatives to capture nuanced data both offline and online. The application design was informed by this user research; it simplified data entry, transmission, and format customization to suit diverse organizational needs. Built as a Software-as-a-Service (SaaS) solution, the platform allowed client organizations to configure survey structures and workflows according to their specific operational contexts. The application has been deployed across inaccessible and remote areas in Bangladesh, Myanmar, Uganda, Kenya, Afghanistan, and Nepal, serving over 3,000 field representatives.

As part of this initiative, I led the effort in both user experience research and Android development. My responsibilities were as follows:

  • Gathered, analyzed, and translated complex stakeholder requirements into actionable technical specifications and user stories for six B2B clients
  • Conducted user behavior analysis and usability research for an Android application optimized for low-resource environments, identifying key usage patterns and performance bottlenecks
  • Drove product strategy and feature prioritization for an Android application serving over 3,000 active users in resource-constrained markets
  • Managed cross-functional collaboration between development and sales teams to ensure alignment on product objectives and delivery timelines
  • Led and managed the Android development team that built generic Android tools for SaaS products (Field Buzz Sales)
I also led the development of the following Android modules:

Survey Container (Android, Java, Realm, REST API)
Designed a dynamic survey builder that maps large question sets to mobile forms. Integrated regex-based validation logic to ensure structured and accurate user responses aligned with survey instructions.
Product Sales Prediction (Python)
Built a supervised regression model to forecast product sales, based on a 24-month transaction dataset from field agents, to inform logistics and inventory planning (a simplified sketch appears after this list).
Remote Sales Tracker Builder (Android, NoSQL, REST API)
Created a modular tool to help micro-entrepreneurs construct custom sales tracking applications. Implemented a time-based synchronization mechanism to maintain data integrity in low-connectivity environments.
Inventory Updater (Android, NoSQL, REST API)
Developed a robust inventory management interface allowing representatives to update stock levels in bulk or individually. Included support for real-world complexities such as product returns, expired items, loans, advance payments, and image-based documentation.
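
As referenced above, here is a simplified sketch of the Product Sales Prediction module; the file name and feature set are illustrative stand-ins for the 24-month transaction data, and the exact regression model used in production may differ.

    # Simplified sketch of the sales-forecasting module: supervised regression
    # over monthly field-agent transactions. Feature names are illustrative.
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    tx = pd.read_csv("transactions_24mo.csv")  # hypothetical monthly aggregates
    features = ["month_index", "units_sold_prev_month", "region_avg_demand"]
    X, y = tx[features], tx["units_sold"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                        random_state=0)

    model = LinearRegression().fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))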


Services


Academic Services

Conference Program Committee: ACM CSCW '25 Posters (Associate Chair), EuroUSEC 2025, EuroUSEC 2024
Peer Review: ACM CHI, ACM CSCW, ACM CUI, IEEE Transactions on Privacy, ACM UIST, HRI, ACM DIS, ACM IUI, ICWSM, IMX, EuroUSEC
Recognition for Outstanding Peer Review: ACM CHI, ACM CSCW, ACM CUI
Student Volunteer: ACM CHI ('24, '22), ACM CSCW ('24, '23, '22), ACM CUI '24
Mentoring Junior Co-chair: SOUPS 2023
Paper Session Moderator: CSCW 2024
Undergraduate Research Mentor, Indiana University: 2022-2025

Organizational and Volunteer Activities

Luddy Graduate Ambassador, Luddy School, Indiana University, 2022-25
Graduate Representative, Luddy School Student Government, 2021-22
Vice President, Bangladesh Student Association at Indiana University, 2021-22
Member and Organizer of Laboratorians Student Association of BUETians (LAB), 2014-15