UX Design for Medical Education

ROLE

Co-Design Lead

COMPANY

Elsevier

DESCRIPTION

Elsevier, a leading provider of scientific, technical, and medical information, tasked me with leading the UX design for an innovative AI Conversational Interface aimed at medical students.


TEAM

1 Principal UX Designer

1 Senior UX Designer (me)

1 Senior Content

1 System Architect

1 Software Engineer

CONTRIBUTION

User Research

Interface Design

Usability Testing

Prototyping

Co-Project Management

FIGMA FILE


TIMELINE

12 weeks (Oct–Dec 2023)

DELIVERABLES

UX/UI Design

High-Level Concept Visualization

User Research and Persona Development

CONTEXT

The goal? To ensure our conversational interface works for final-year medical students and junior residents. We dove deep into user research and refined our designs repeatedly, aiming to create something that's both appealing and practical.


BACKGROUND

Elsevier is exploring GenAI chat integration to make medical education more interactive and responsive to student needs.


CHALLENGE

Elsevier's current platform lacks a dynamic GenAI interface for engaging students in interactive clinical case studies.


SOLUTION

AI-Powered Study Mentor Interface

Developed a prototype integrating an AI conversational interface into the user experience, enabling medical students to interactively navigate clinical case studies with an intelligent GenAI study mentor.


USER RESEARCH & FINDINGS

To evaluate the potential of a conversational AI interface in enhancing medical education, we engaged in comprehensive user research with our target demographic: final-year medical students and junior residents. Our research aimed to validate the concept's desirability.

Methodology: A Hybrid Approach

We employed a mixed-methods approach, blending qualitative and quantitative research tactics:

  1. In-Depth Interviews: Conducted to capture the nuanced perspectives and expectations of students towards AI in their education.

  2. Surveys: Distributed to gain a broader understanding of current usage patterns of similar technologies.


KEY INSIGHTS

Current Usage Patterns:

  • Many students already utilized digital resources, indicating a familiarity and openness to tech-based learning aids.

  • Popular platforms like Amboss and Pass Medicine provided valuable learning frameworks, setting a high bar for our AI solution.

Desirability and Expectations:

  • Conversational AI was perceived as highly desirable, primarily for its potential to simulate realistic patient interactions and facilitate clinical decision-making.

  • Students expected a seamless, proactive experience that would not only respond to queries but also guide their learning journey.

Interaction Preferences:

  • Input styles varied: some students preferred concise questions, while others leaned towards detailed, exploratory discussions.

  • Responses were expected to be concise yet comprehensive, delivered in a friendly, helpful, and supportive tone, with citations provided where applicable.


THE PROCESS

The Aim and the 'How Might We' Questions

Our mission was to create an AI tool loved by students and endorsed by educators, addressing their unique needs in the learning process.


DESK RESEARCH

Desk Research and Ideation Workshops

Leveraging insights from our team and existing AI solutions, we crafted four distinct concepts responding to our targeted 'How Might We' questions.


PERSONAS & STORYBOARDS

We sketched user profiles and storyboards to visualize AI's potential impact.


ROUND 1: INSIGHTS THROUGH USER INTERVIEWS

Engaging with students unveiled a desire for personalization, simplicity, and integration in AI tools. Contrary to our assumptions, students were open to AI, especially when it was recommended by trusted institutions.


PART 1: LEARNING FROM FEEDBACK

We refined our AI mentors based on feedback from Round 1, focusing on what students need, creating intuitive designs, and improving through continuous testing.


PART 2: DESIGNING CONVERSATIONS

We then developed realistic interactions for the AI mentor, tailored to scenarios like pediatric asthma. This included mapping out conversation flows, scripting medical dialogues, and designing user-friendly interface wireframes.


ROUND 2: REFINEMENT AND USER FEEDBACK

For a second round of testing, we refined our AI models and interfaces and gathered feedback from both students and educators. This round confirmed the appeal of our 'AI Mentor' concept for clinical case studies and of the 'AI Performance Tool' for classroom enhancement.


CONCLUSION

Unexpected Insights and Project Evolution

Discovering the complementary potential of both concepts, we saw a path to a unified solution. Though project priorities shifted, our groundwork has become a cornerstone for future endeavors in AI and education.