Conduct thorough user testing to ensure Organ.AI effectively meets user needs and is user-friendly.
Raise initial funding to bring Organ.AI to market and improve the efficiency of business scheduling for professionals.
I collaborated closely with the Product Director, 2 Product Managers, and 8 engineers to initiate the research process, generate ideas, validate solutions, and successfully launch the product.
Increased the Organ.AI SUS score from 57.8 to 84.7 and improved user efficiency in booking meetings by 57%.
Successfully raised 500k SEK in venture capital and angel funding to hire developers and bring Organ.AI to market.
Scheduling meetings in a business environment can be a time-consuming and cumbersome process, involving multiple emails or phone calls to find a mutually available time. This can result in frustration, missed opportunities, and wasted time for busy professionals. Some of the main problems that we encountered while conducting a survey study were:
Challenge #1
Efficient communication is key in the business world, and scheduling a meeting is no exception. On average, it takes 8 emails to successfully schedule just one meeting, highlighting the importance of clear and timely correspondence.
Challenge #2
Scheduling a meeting can be a time-consuming task in itself, with an average of 5 hours required to coordinate schedules and confirm details - time that busy professionals would rather spend on the meeting itself.
Challenge #3
The process of scheduling a meeting is often a back-and-forth effort, with 89% of scheduled meetings requiring multiple rounds of email correspondence before a time is finally agreed.
When I first joined the team, we already had a design prototype, but it had not been validated. We were uncertain about whether the design would meet the needs of our users, and we needed to conduct interviews to gather feedback. Given that Organ.AI is an AI app that requires AI interaction and response, I decided to spend time implementing the prototype directly. This allowed us to quickly validate the solution and make any necessary improvements.
Our original product, PrimeHub.
Build
As a product designer, my first step upon joining the team was to quickly build a proof of concept for Organ.AI and validate the idea with potential users. To do this, I leveraged Xcode and some existing libraries and components to build the app rapidly. Through rapid prototyping and user testing methods, I gathered valuable insights and feedback that allowed us to iterate on the design and development of the final product, ensuring that it met the needs of our target users.
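The original Xcode project has long since evolved, but to give a sense of what that first proof of concept looked like, here is a minimal SwiftUI sketch of the kind of chat screen we started from; the view structure and the canned assistant reply are illustrative placeholders rather than the shipped code.

```swift
import SwiftUI

// A single chat bubble: either from the user or from the scheduling assistant.
struct ChatMessage: Identifiable {
    let id = UUID()
    let text: String
    let isFromUser: Bool
}

struct ChatScreen: View {
    @State private var messages: [ChatMessage] = [
        ChatMessage(text: "Hi! Which meeting would you like to schedule?", isFromUser: false)
    ]
    @State private var draft = ""

    var body: some View {
        VStack {
            // Scrollable transcript of the conversation so far.
            ScrollView {
                ForEach(messages) { message in
                    Text(message.text)
                        .padding(10)
                        .background(message.isFromUser ? Color.blue.opacity(0.2) : Color.gray.opacity(0.2))
                        .cornerRadius(12)
                        .frame(maxWidth: .infinity,
                               alignment: message.isFromUser ? .trailing : .leading)
                        .padding(.horizontal)
                }
            }
            // Text field plus send button, mimicking the MVP's typing-based input.
            HStack {
                TextField("Type a message…", text: $draft)
                    .textFieldStyle(.roundedBorder)
                Button("Send") {
                    guard !draft.isEmpty else { return }
                    messages.append(ChatMessage(text: draft, isFromUser: true))
                    // In the real prototype this is where the AI backend was called;
                    // here we simply append a placeholder reply.
                    messages.append(ChatMessage(text: "Got it - let me check the calendars.", isFromUser: false))
                    draft = ""
                }
            }
            .padding()
        }
    }
}
```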
Our first MVP
Test
"During the first round of testing, we conducted a beta-loop test where we asked 12 participants to connect their own calendars and use the app for a week. We collected feedback from them and identified several key issues, including a lack of feedback, unclear instructions for getting started, and time-consuming data input due to typing.
Feedback we received along the way
Following the first round of testing, we held a workshop with the development team to prioritize the issues identified by the users and generate ideas for potential solutions. By working collaboratively and drawing on the feedback from the beta test, we were able to identify key features and design changes that would improve the user experience and address the concerns raised by our initial testers.
Our team :-D
Some ideas we came up with during the workshop
Final proposal that we decided to move forward with.
One is a fully text-based chatbot and the other is a fully button-based chatbot.
Fun fact #1
When reaching out to strangers on the internet, my approach is to be human and think about how I would engage in a conversation with them in person. I try to ask questions that will make the person feel comfortable and willing to respond. Since I have worked for several startups that had limited resources in the beginning, I have had ample experience in reaching out to strangers online. Additionally, I make sure to tailor my approach to the specific person and their interests in order to build a connection and increase the likelihood of them responding positively.
Research
We generated two potential solutions during the workshop: a text-based chatbot and a button-based chatbot. Rather than spending time on building a prototype, we decided to perform A/B testing with existing tools in the market to determine which option was more effective. We utilized a messaging library to create the text-based chatbot and Landbot.io for the button-based chatbot, both of which were integrated with our AI backend to fully support our users in scheduling meetings.
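Neither integration is public, so purely as an illustration, the snippet below sketches how a chat front end could forward a user's message to a scheduling backend; the endpoint URL and the request/response shapes are invented for this example and are not Organ.AI's actual API.

```swift
import Foundation

// Hypothetical request/response shapes for the scheduling backend.
struct ScheduleRequest: Codable {
    let userId: String
    let message: String          // e.g. "Book 30 min with Anna next Tuesday"
}

struct ScheduleReply: Codable {
    let reply: String            // the assistant's answer to show in the chat
    let suggestedSlots: [String] // ISO 8601 timestamps the user can pick from
}

// Forwards one chat message to the (hypothetical) AI backend and returns its reply.
func sendToAssistant(_ message: String, userId: String) async throws -> ScheduleReply {
    // Placeholder URL - the real backend address is not part of this case study.
    var request = URLRequest(url: URL(string: "https://api.example.com/v1/schedule")!)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(ScheduleRequest(userId: userId, message: message))

    let (data, _) = try await URLSession.shared.data(for: request)
    return try JSONDecoder().decode(ScheduleReply.self, from: data)
}
```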
The versions we came up with
Research
To gather insights on the effectiveness of the two chatbot designs, I conducted usability testing with 9 participants and collected their feedback. Additionally, we used the System Usability Scale (SUS) questionnaire to measure the overall usability score and timed how long it took for participants to complete tasks using each design. The data from this testing helped us to make an informed decision on which version to move forward with.
A message that I sent to a well-known data scientist with 20k+ followers.
Social media posts
Measure #1
During the study, participants were asked to complete three tasks using both prototypes. We measured the time-to-complete and found that the text-based chatbot was 20% more efficient than the button-based one.
Measure #2
We conducted usability testing and asked participants to fill out the System Usability Scale (SUS) survey to measure usability.
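For readers unfamiliar with SUS: each participant answers ten statements on a 1-5 scale, odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A small sketch of that standard calculation:

```swift
// Computes a standard System Usability Scale score from the ten 1-5 responses
// of a single participant. Odd-numbered items contribute (response - 1),
// even-numbered items contribute (5 - response); the sum is scaled by 2.5
// to yield a 0-100 score.
func susScore(responses: [Int]) -> Double? {
    guard responses.count == 10, responses.allSatisfy({ (1...5).contains($0) }) else {
        return nil // malformed questionnaire
    }
    let contributions = responses.enumerated().map { index, response in
        index % 2 == 0 ? response - 1 : 5 - response  // index 0 is item 1 (odd-numbered)
    }
    return Double(contributions.reduce(0, +)) * 2.5
}

// Example: one participant's answers to items 1-10.
// susScore(responses: [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]) == 85.0
```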
Typing Process Takes Too Much Time
Calendar Is Not Visible - not sure which time to pick
AI Assistant Needs More Personality
User Flow Is Intuitive
Process Is Too Rigid, Takes Time to Complete
Difficult to Add New Items
Less Effort on Typing
Research
Based on feedback from user testing and research, we made several key improvements to our scheduling assistant app: we integrated button-based interactions into the text-based prototype, added an auto-complete feature to minimize typing effort, and gave our AI assistant more personality.
Improvement 1 - Integrated Button-Based Interaction into Text-Based Prototype
Improvement 2 - Added Auto-complete Feature to Minimize Typing Effort (see the sketch after this list of improvements)
Improvement 3 - Personalized Elements
Improvement 4 - Added Tips at the Beginning of the Conversation to Hint at What Users Can Type
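To make Improvement 2 a bit more concrete, here is a minimal sketch of the prefix-matching logic behind an auto-complete input; the phrase list and the matching rule are simplified stand-ins for what the app actually uses.

```swift
import Foundation

// A tiny prefix-matching suggester for common scheduling phrases.
// The phrase list is illustrative only.
struct PhraseSuggester {
    let phrases = [
        "Schedule a meeting with",
        "Find a 30-minute slot next week",
        "Reschedule my meeting with",
        "Cancel my meeting on",
        "What does my Tuesday look like?"
    ]

    // Returns up to `limit` phrases that start with what the user has typed so far.
    func suggestions(for typedText: String, limit: Int = 3) -> [String] {
        let query = typedText.trimmingCharacters(in: .whitespaces).lowercased()
        guard !query.isEmpty else { return [] }
        return phrases
            .filter { $0.lowercased().hasPrefix(query) }
            .prefix(limit)
            .map { $0 }
    }
}

// Example: typing "sch" surfaces "Schedule a meeting with" as a tappable suggestion.
// PhraseSuggester().suggestions(for: "sch") == ["Schedule a meeting with"]
```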
Research
After making improvements to the design based on feedback from previous usability testing, we conducted a second round of testing with participants using the improved chatbots. Our goal was to measure the effectiveness and efficiency of the changes we made and gather further insights to inform the final design of our AI scheduling assistant.
The second round of usability testing resulted in an impressive SUS score of 84.7, a 20% increase from the previous prototype.
Auto-complete reduces time and effort: In the interviews, we received a lot of positive feedback on the auto-complete feature, which saved participants a lot of typing time.
More personality on the bot: Participants noticed that the bot's personality had improved, but they would like even more of it to bring the bot closer to the user.
More button options: Users also asked for more button options, such as duration presets (e.g., "1 hour", "2 hours") so that they don't have to type the length of the meeting manually.
Hashtag is weird: Users complained that using a hashtag as the trigger for recommended phrases felt odd.
Improvements on the design: A few participants also recommended redesigning the visuals of the app to make it more consistent and visually appealing.
Design
After addressing the feedback we received, we gave the design a final polish to ensure consistency and improve the overall user experience. With the design finalized, we handed it off to the development team for implementation.
Key Solutions - 1. Redesigned the UI of the app
Key Solutions - 2. Added more personality to the app, improved the tone
Key Solutions - 3. Transparency on the schedule, allowing users to adjust settings easily
Key Solutions - 4. Confirmation Pop Up/Notification for extra security
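As a rough illustration of Key Solution 4, a confirmation could be surfaced as a local notification via Apple's UserNotifications framework; the sketch below uses placeholder meeting details and a placeholder delay, and is not the exact implementation the team shipped.

```swift
import UserNotifications

// Asks for permission (once) and then schedules a local confirmation
// notification for a meeting the assistant has just booked.
// The title, body, and 5-second delay are placeholder values.
func confirmBooking(title: String, body: String) {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound]) { granted, _ in
        guard granted else { return }

        let content = UNMutableNotificationContent()
        content.title = title   // e.g. "Meeting booked"
        content.body = body     // e.g. "Design sync, Tue 14:00-14:30"
        content.sound = .default

        // Fire shortly after booking so the user gets an explicit confirmation.
        let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
        let request = UNNotificationRequest(identifier: UUID().uuidString,
                                            content: content,
                                            trigger: trigger)
        center.add(request)
    }
}
```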
The new design has helped us reach a higher CSAT score because it allows customers to quickly find the information they need and navigate the app without friction. The nav bar feature has been particularly well received, as it gives users the ability to track translation status with ease. This has led to an increase in user engagement and satisfaction.
With the new design released, it has also become easier for us to integrate more apps thanks to the scalable design. As of 2023, we have released 7 more apps, including Typeform, Google Play, YouTube, Marketo, and many more!
As the services that VdoTok provides involve complicated connection data, it was initially a big headache to present 50+ metrics on a single web page. To address this, I first categorized the data into different levels and categories, and then came up with ideas to visualize it without taking up too much space.
As the end users of this product are developers, I worked closely with the developers in our own company during the design process, which ensured regular feedback to improve the design.