Sabid Bin Habib Pias

Human-Centered AI researcher | Graduate Research Assistant at IU Privacy Lab

I am a Ph.D. candidate in the Department of Computer Science at the Luddy School of Informatics, Computing, and Engineering, Indiana University Bloomington, and a Research Assistant at the IU Privacy Lab, advised by Professor Apu Kapadia. As a human-centered AI researcher, I have expertise in user study design, quantitative and qualitative analysis, and machine learning applications in everyday technologies. I am passionate about improving user trust in voice assistants, particularly by leveraging the paralinguistic attributes of voice agents. I have also worked on improving user understanding of machine learning outcomes through eXplainable AI (XAI) during my internship at Idaho National Laboratory in summer 2023. I also have a strong interest in developing privacy-sensitive AI systems that enhance human capabilities and improve quality of life.

I am actively looking for full-time Research Scientist or Software Engineer positions, so feel free to reach out if your team is hiring!


Research

Projects


Towards the Persuasiveness of Smart Voice Assistants

We studied how user trust in Smart Voice Assistants (SVAs) can be improved in the context of online shopping decisions. Through an interactive study, we measured users' trust in synthesized voices with varying emotional tones.

Methods: Voice Synthesis, Quantitative analysis, Interactive user study design

Tools: Python, R, Microsoft Speech Studio, Qualtrics
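
For illustration, here is a minimal sketch of how a voice stimulus with a controlled emotional tone can be synthesized with the Azure Speech SDK (the service behind Microsoft Speech Studio); the voice name, "cheerful" style, and recommendation text are placeholder assumptions, not the exact study stimuli:

    # Sketch: synthesize a product recommendation in a controlled emotional tone.
    # Assumes the azure-cognitiveservices-speech package and valid Azure credentials.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

    # SSML lets the voice, emotional style, and rate vary across study conditions
    # while the spoken content stays identical.
    ssml = """
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
           xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">
      <voice name="en-US-JennyNeural">
        <mstts:express-as style="cheerful">
          Based on your preferences, I recommend this coffee maker.
        </mstts:express-as>
      </voice>
    </speak>
    """
    result = synthesizer.speak_ssml_async(ssml).get()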

Responsibilities

  • Defined Project Scope: Systematically reviewed existing literature to identify usability and trustworthiness gaps in current Voice Assistant designs and to define the project scope.
  • Developed Experimental Designs: Created robust experimental designs tailored to the unique requirements of each project, aligning them with the research scope.
  • Conducted Usability Studies: Designed prototypes and conducted quantitative and qualitative studies to elucidate user perceptions of proposed Voice Assistant designs.
  • Performed Data Analysis: Carried out thorough data analyses to derive meaningful insights and draw conclusions from experiments.
  • Interpreted and Presented Findings: Regularly presented findings to the research team to gather additional insights from diverse perspectives.
  • Collaborated on Research Efforts: Used periodic feedback from the research group to continuously improve conceptual and experimental designs, ensuring high-quality outcomes for Voice Assistant research.
  • Manuscript Development: Led the writing and refinement of manuscripts for publication, synthesizing complex research findings into coherent narratives and adapting content based on constructive feedback from the research group.

Evaluating User Trust and Agreement with Explainable AI

In this study, we compare non-expert users' trust in and agreement with an Intelligent Agent (IA) when the IA explains its image classification outcomes with varying adversarial explanations. Through an interactive study design, we evaluate user behavior in response to textual, audio, and visual explanations provided by the IA classifier.

Methods: Quantitative analysis, Interactive user study design

Tools: Python, R, Qualtrics
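
For illustration, visual explanations of an image classifier's decision can be generated with a saliency method such as LIME; this is a minimal sketch with a hypothetical image and classifier, not the study's actual pipeline (which used only the tools listed above):

    # Sketch: generate a visual explanation for one image classification outcome.
    # The image and classifier below are hypothetical stand-ins.
    import numpy as np
    from lime import lime_image
    from skimage.segmentation import mark_boundaries

    image = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)

    def classify_fn(batch):
        # Dummy two-class probability output of shape (n_samples, 2).
        scores = batch.mean(axis=(1, 2, 3)) / 255.0
        return np.stack([scores, 1.0 - scores], axis=1)

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(image, classify_fn,
                                             top_labels=1, num_samples=200)

    # Highlight the superpixels that most support the top predicted label.
    img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                               positive_only=True, num_features=5)
    overlay = mark_boundaries(img / 255.0, mask)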

Decaying Photos for Enhanced Privacy

We proposed two temporal redaction methods for photos posted on social media. In these methods, sensitive content in a posted photo decays gradually so that it is no longer identifiable after a certain period. We quantitatively compared user behavior toward the proposed methods against existing sharing mechanisms and found that 17-21% of participants preferred the proposed methods when the photos contained no identifiable information or were shared with close contacts.

Methods: Quantitative analysis, User study design

Tools: R, Qualtrics
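
For illustration, one plausible way to implement such gradual decay is to increase a blur radius over a redaction region as time passes; this Pillow sketch uses a placeholder region and decay schedule, not the paper's exact mechanism:

    # Sketch: gradually decay a sensitive region of a posted photo by
    # increasing the blur radius until the content is unidentifiable.
    from PIL import Image, ImageFilter

    def decay_region(photo_path, box, hours_since_post, decay_hours=48, max_radius=30):
        """Blur `box` (left, upper, right, lower) in proportion to elapsed time."""
        img = Image.open(photo_path).convert("RGB")
        progress = min(hours_since_post / decay_hours, 1.0)  # 1.0 = fully decayed
        region = img.crop(box).filter(ImageFilter.GaussianBlur(progress * max_radius))
        img.paste(region, box)
        return img

    # Example: a photo posted 24 hours ago is half-decayed.
    decayed = decay_region("post.jpg", box=(100, 80, 300, 240), hours_since_post=24)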

Publication: CSCW'22


Industry Experience

Internship


Idaho National Laboratory

Python Programmer Intern, Summer 2023

Worked on machine learning and eXplainable AI (XAI) with a focus on improving user interface reliability and trust for skilled operators.

Methods: eXplainable AI (XAI), Software Development, Data Visualization, Usability Study Design

Tools: Python, LIME, SHAP, PyQt

Responsibilities

  • Formulated an Execution Plan for an Explainable AI Project: Devised a comprehensive plan for integrating Explainable AI into water pump fault prediction software, including strategic enhancements to XAI techniques that improve the interpretability of machine learning predictions.
  • Collaborated on Interface Design with the Human Factors Team: Worked closely with the human factors team to create questionnaires for experts and operators, with the goal of uncovering their requirements for an interface that makes machine learning prediction outcomes clear.
  • Facilitated Iterative Development of the Explainable AI Interface: Through multiple planning and development cycles incorporating feedback from mentors, built an efficient Explainable AI interface for the water pump fault prediction software with LIME and SHAP (a minimal sketch of this approach follows this list). The interface enhanced the interpretability of machine learning outcomes and empowered operators to make well-informed decisions.
  • Presented a poster at the intern poster session [Poster]
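
As referenced above, here is a minimal sketch of how LIME and SHAP can explain a tabular fault-prediction model; the random-forest model, sensor features, and labels are illustrative stand-ins, not INL's actual software:

    # Sketch: explain individual fault predictions of a tabular model with
    # SHAP and LIME. The model, features, and labels are hypothetical stand-ins.
    import numpy as np
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier

    feature_names = ["vibration", "flow_rate", "bearing_temp", "motor_current"]
    X_train = np.random.rand(500, 4)                             # placeholder sensor readings
    y_train = (X_train[:, 0] + X_train[:, 2] > 1.2).astype(int)  # placeholder fault labels
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    sample = X_train[0]

    # SHAP: per-feature contributions to this one prediction (for tree models).
    shap_values = shap.TreeExplainer(model).shap_values(sample.reshape(1, -1))

    # LIME: fit a local surrogate model around the same instance.
    lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                          class_names=["normal", "fault"],
                                          mode="classification")
    lime_exp = lime_explainer.explain_instance(sample, model.predict_proba, num_features=4)
    print(lime_exp.as_list())  # feature-condition/weight pairs for this prediction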

Publications

  • Sabid Bin Habib Pias, Imtiaz Ahmad, Taslima Akter, Adam J. Lee, and Apu Kapadia. Decaying Photos for Enhanced Privacy: User Perceptions Towards Temporal Redactions and Trusted Platforms. In ACM Conference on Computer-Supported Cooperative Work (CSCW'22).
  • Sabid Bin Habib Pias, Ran Huang, Donald Williamson, Minjeong Kim, and Apu Kapadia. The Impact of Perceived Tone, Age, and Gender on Voice Assistant Persuasiveness in the Context of Product Recommendations. In ACM Conference on Conversational User Interfaces (CUI 2024).
  • Sabid Bin Habib Pias, Alicia Freel, Timothy Trammel, Taslima Akter, Donald Williamson, and Apu Kapadia. The Drawback of Insight: Detailed Explanations Can Reduce Agreement with XAI. In ACM CHI Workshop on Human-Centered Explainable AI (CHI 2024).
  • Alicia Freel, Sabid Bin Habib Pias, Selma Sabanovic, and Apu Kapadia. Navigating Trust Erosion in Human-AI Collaboration: Unpacking the Impact of Severity and Timing in Misclassification. In ACM CHI Workshop on Trust and Reliance in Evolving Human-AI Workflows (CHI 2024).


Curriculum Vitae

Click here to download my CV