2022 | OrganAI AB | Co-founder, Product Design Lead

OrganAI: Designing an AI-Enabled App for Efficient Business Scheduling

Objectives

  • Conduct thorough user testing to ensure Organ.AI effectively meets user needs and is user-friendly.

  • Raise initial funding to bring Organ.AI to market and improve the efficiency of business scheduling for professionals.

Role

I collaborated closely with the Product Director, 2 Product Managers, and 8 engineers to initiate the research process, generate ideas, validate solutions, and successfully launch the product.

Outcome

  • Increased the Organ.AI SUS score from 57.8 to 84.7 and improved user efficiency in booking meetings by 57%.

  • Successfully raised 500k SEK in venture capital and angel funding to hire developers and bring Organ.AI to market.

Process overview

Scheduling meetings is time-consuming.

Scheduling meetings in a business environment can be a time-consuming and cumbersome process, involving multiple emails or phone calls to find a mutually available time. This can result in frustration, missed opportunities, and wasted time for busy professionals. Some of the main problems that we encountered while conducting a survey study were:

Challenge #1

Scheduling 1 meeting takes 8 emails on average

Efficient communication is key in the business world, and scheduling a meeting is no exception. On average, it takes 8 emails to successfully schedule just one meeting, highlighting the importance of clear and timely correspondence.

Challenge #2

It takes an average of 5 hours to schedule a meeting

Scheduling a meeting is a time-consuming task: on average, 5 hours are spent coordinating schedules and confirming details. For busy professionals, that is time taken away from more productive work.

Challenge #3

89% of meetings required back-and-forth emails

The process of scheduling a meeting is often a back-and-forth effort, with 89% of scheduled meetings requiring multiple rounds of email correspondence before a time is confirmed.

My initial challenge - No validation / No research

When I first joined the team, we already had a design prototype, but it had not been validated. We were uncertain about whether the design would meet the needs of our users, and we needed to conduct interviews to gather feedback. Given that Organ.AI relies on live AI interaction and responses, I decided to spend time implementing the prototype directly rather than testing static mockups. This allowed us to quickly validate the solution and make any necessary improvements.

Our original product, PrimeHub.

Build

Building PoC to receive feedback

As a product designer, my first step upon joining the team was to quickly build a proof of concept for Organ.AI and validate the idea with potential users. To do this, I leveraged Xcode and some existing libraries and components to build the app rapidly. Through rapid prototyping and user testing methods, I gathered valuable insights and feedback that allowed us to iterate on the design and development of the final product, ensuring that it met the needs of our target users.

Our first MVP

Test

Testing and gathering feedback from users

During the first round of testing, we conducted a beta-loop test where we asked 12 participants to connect their own calendars and use the app for a week. We collected feedback from them and identified several key issues, including a lack of feedback, unclear instructions for getting started, and time-consuming data input due to typing.

Feedback we received along the way

Workshop session to improve solution

Following the first round of testing, we held a workshop with the development team to prioritize the issues identified by the users and generate ideas for potential solutions. By working collaboratively and drawing on the feedback from the beta test, we were able to identify key features and design changes that would improve the user experience and address the concerns raised by our initial testers.

Our team :-D

Some ideas we came up with during the workshop

The final proposals we decided to move forward with.
One is a fully text-based chatbot and the other is a fully button-based chatbot.

Fun fact #1

What’s the tip to reaching out to strangers on Linkedin?

When reaching out to strangers on the internet, my approach is to be human and think about how I would engage in a conversation with them in person. I try to ask questions that will make the person feel comfortable and willing to respond. Since I have worked for several startups that had limited resources in the beginning, I have had ample experience in reaching out to strangers online. Additionally, I make sure to tailor my approach to the specific person and their interests in order to build a connection and increase the likelihood of them responding positively.

Research

Building quick prototypes to A/B test

We generated two potential solutions during the workshop: a text-based chatbot and a button-based chatbot. Rather than spending time building custom prototypes, we decided to perform A/B testing with existing tools on the market to determine which option was more effective. We utilized a messaging library to create the text-based chatbot and Landbot.io for the button-based chatbot, both of which were integrated with our AI backend to fully support our users in scheduling meetings.
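To illustrate the kind of request the AI backend had to turn into a booking, here is a minimal, hypothetical sketch of parsing a scheduling message into structured fields. The field names and regex patterns are my own assumptions for illustration, not Organ.AI's actual NLP pipeline, which handled far more phrasings:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeetingRequest:
    attendee: Optional[str]
    day: Optional[str]
    time: Optional[str]

def parse_request(text: str) -> MeetingRequest:
    """Very rough rule-based parse of a scheduling message.
    A real NLP backend would use a trained model, not regexes."""
    lowered = text.lower()
    attendee = re.search(r"with (\w+)", text)
    day = re.search(
        r"\b(today|tomorrow|monday|tuesday|wednesday|thursday|friday)\b", lowered
    )
    time = re.search(r"\b(\d{1,2}(?::\d{2})?\s?(?:am|pm))\b", lowered)
    return MeetingRequest(
        attendee=attendee.group(1) if attendee else None,
        day=day.group(1) if day else None,
        time=time.group(1) if time else None,
    )

req = parse_request("Schedule a meeting with Anna tomorrow at 2pm")
print(req.attendee, req.day, req.time)
```

A message the parser cannot decode (any `None` field) is exactly the "wrong request" failure mode described later, which the button-based flow avoided by constraining input to known options.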

The versions we came up with

Research

Testing the concept

To gather insights on the effectiveness of the two chatbot designs, I conducted usability testing with 9 participants and collected their feedback. Additionally, we used the System Usability Scale (SUS) questionnaire to measure the overall usability score and timed how long it took for participants to complete tasks using each design. The data from this testing helped us make an informed decision on which version to move forward with.

A message that I sent to a well-known data scientist with 20k+ followers.

Social media posts

Finding #1

Text-based chatbot is 20% more efficient

During the study, participants were asked to complete three tasks using both prototypes. We measured the time-to-complete and found that the text-based chatbot was 20% more efficient than the button-based one.
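For clarity on how a figure like this is derived, the sketch below computes efficiency gain as the percent reduction in mean time-to-complete relative to the baseline. The numbers here are illustrative placeholders, not the actual study data:

```python
def efficiency_gain(baseline_seconds: float, variant_seconds: float) -> float:
    """Percent reduction in time-to-complete relative to the baseline."""
    return (baseline_seconds - variant_seconds) / baseline_seconds * 100

# Illustrative numbers only, not the measured task times:
button_based = 150.0  # mean seconds per task, button-based chatbot
text_based = 120.0    # mean seconds per task, text-based chatbot

print(f"{efficiency_gain(button_based, text_based):.0f}% faster")
```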

Finding #2

Text-based chatbot’s SUS score is 18% higher

We conducted usability testing and asked participants to fill out the System Usability Scale (SUS) survey after using each prototype; the text-based chatbot scored 18% higher than the button-based one.
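For readers unfamiliar with SUS, the score is computed from 10 Likert-scale responses (1 to 5): odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is multiplied by 2.5 to give a 0 to 100 score. A minimal sketch (the example answers are made up):

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from 10 Likert responses (1-5).
    Odd-numbered items are positively worded, even-numbered items negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly 10 responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# One participant's (made-up) answers:
print(sus_score([5, 1, 4, 2, 5, 1, 4, 2, 5, 2]))
```

Per-participant scores are then averaged across the study; 68 is the commonly cited industry average.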

Other feedback on text-based chatbot

  • Typing process takes too much time

  • Calendar is not visible: not sure which time to pick

  • AI assistant needs more personality

  • User flow is intuitive

Other feedback on button-based chatbot

  • Process is too rigid, takes time to complete

  • Difficult to add new items

  • Less effort on typing

Research

Improving the design

Based on feedback from user testing and research, we made several key improvements to our scheduling assistant app. These included integrating button-based interactions into our text-based prototype, adding an auto-complete feature to minimize typing effort, and giving our AI assistant more personality.

Improvement 1 -  Integrated Button-Based Interaction into Text-Based Prototype

From the testing sessions in the first iteration, I learned that users preferred the text-based prototype but found that the button-based prototype saved time because they could simply select different options and tap buttons to schedule a meeting. Moreover, as the button-based prototype allowed them to choose from a list of default options, it also decreased the chance the user would type a wrong request (e.g., requests that the NLP could not recognise) and increased its usability.

Improvement 2 - Added Auto-complete Feature to Minimise Typing Effort

In addition to implementing buttons and menu options in the text-based flow, I also minimised the effort required from users by implementing an auto-complete feature.
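At its core, an auto-complete like this matches what the user has typed so far against a set of known phrases and offers the closest completions. A minimal sketch of that matching logic, with hypothetical canned phrases (the actual suggestion list and ranking in Organ.AI were more elaborate):

```python
def suggest(prefix: str, phrases: list[str], limit: int = 3) -> list[str]:
    """Return up to `limit` known phrases that start with what the user typed."""
    p = prefix.lower().strip()
    if not p:
        return []
    return [s for s in phrases if s.lower().startswith(p)][:limit]

# Hypothetical canned phrases the assistant could complete:
PHRASES = [
    "Schedule a meeting with",
    "Schedule a call tomorrow at",
    "Show my calendar for",
    "Cancel my meeting with",
]

print(suggest("sche", PHRASES))
```

Constraining input to completions of known phrases also reduces requests the NLP cannot recognise, the same benefit the button-based prototype had.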

Improvement 3 - Personalised elements

During the first iteration, one user reported that they would trust and prefer an AI assistant that was more human-like. As a result, this version added some personality traits, a profile image, and a name to the AI assistant and included small delays with typing indicators to give users the impression that the AI assistant was thinking about its response.

Improvement 4 -  Added some tips in the beginning of the conversation to hint users what to type

In this design version, more instructions were added to the beginning of the conversation to give users hints about what to type, since first-time users were often unsure how to phrase requests to the AI assistant.

Research

Testing with more users

After making improvements to the design based on feedback from previous usability testing, we conducted a second round of testing with participants using the improved chatbots. Our goal was to measure the effectiveness and efficiency of the changes we made and gather further insights to inform the final design of our AI scheduling assistant.


A higher SUS score

The second round of usability testing resulted in an impressive SUS score of 84.7, a 20% increase from the previous prototype.

What they liked

  • Auto-complete reduces time and effort: From the interviews, we received a lot of positive feedback on the auto-complete feature, which saved participants considerable typing time.

  • More personality on the bot: Participants noticed that the bot's personality had improved, but wanted even more personality to bring the assistant closer to the user.

What they disliked

  • More button options: Users asked for additional button options, such as duration choices (e.g., “1 hour”, “2 hours”), so they don’t have to type the meeting duration manually.

  • Hashtag is weird: Users complained that using a hashtag as the trigger for recommended phrases felt strange.

  • Improvements on the design: A few participants also recommended redesigning the visuals of the app to make it more consistent and visually appealing.

Design

Final improvements

After addressing the feedback we received, we conducted a final polish on the design to ensure consistency and improve the overall user experience. With the design now finalized, we handed it off to the development team for implementation.

Key Solutions - 1. Redesigned the UI of the app

We decided to redesign the whole UI before handing the design off to development.

Key Solutions - 2. Added more personality on the app, improved the tone

Building on the earlier feedback that users preferred a more human-like assistant, we refined the assistant's tone of voice and gave it more personality so that conversations felt natural and friendly.

Key Solutions - 3. Transparency on the schedule, allow users to adjust settings easily

Earlier testers reported that the calendar was not visible, leaving them unsure which time to pick. We made the schedule visible within the conversation and allowed users to review and adjust their settings easily.

Key Solutions - 4. Confirmation Pop Up/Notification for extra security

Before a meeting is booked, the assistant now shows a confirmation pop-up and sends a notification, giving users an extra layer of security against accidental or incorrect bookings.

Results

Raised initial funding to move forward with the development

With a validated design and strong usability results, we successfully raised 500k SEK in venture capital and angel funding, which allowed us to hire developers and move Organ.AI towards a market launch.

Increased SUS score from 57.6 to 84.7

The final round of usability testing produced a SUS score of 84.7, up from 57.6 for the original prototype and well above the commonly cited industry average of 68.

Reduced task completion time by 10%

With auto-complete, button-based interactions, and clearer onboarding instructions, participants completed their scheduling tasks 10% faster than in the previous iteration.

Learnings

1. Everyone is a designer

From implementing the first proof of concept myself to running workshops with the whole team, I learned that good ideas come from everyone: engineers, product managers, and even our test participants. Involving the whole team in ideation produced better solutions than designing in isolation.

2. Work closely with engineers

Because I implemented the early prototypes myself and the final design had to be handed off for implementation, I worked closely with the engineers throughout the design process. Their regular feedback kept the design technically feasible and easy to build.

Check other case studies

Redesigning Lokalise's Marketing Content Management Page

PipeRider: Data Observability Tool for Data Scientists

Designing a User Feedback System to Improve Lokalise Messages