

Improving Sensor Alignment with User Feedback

Problem

 

To complete onboarding, a user needs to connect, attach, and finally align their sensors. During the alignment flow, the user is instructed to get into the alignment position by stepping onto an alignment block (a yoga block) to create flexion, before the app records the sensor angle.

 

A study with representative users found a number of issues with the existing Sensor Alignment workflow, a workflow critical to the overall success of the product.

Brief

 

Using study data, propose a new concept design to replace the existing MS 2.0 Sensor Alignment workflow.

Analysis of the Problem

CURRENT FLOW

At the end of the alignment workflow, the app asks the user to test sensor accuracy (an assessment by the user of how accurately the sensors have been aligned). Accurately aligned sensors record an angle close to 0 degrees when the leg is fully straight.

(1) SENSOR ALIGNMENT - The app records the sensor angle when the user is holding the alignment position on the block 

(2) SENSOR ACCURACY CHECK SCREEN/INSTRUCTIONS - The user is then instructed to step off the block and straighten their leg. Once the straight leg is confirmed, the user is introduced to the avatar

(3) SENSOR ACCURACY CHECK SCREEN/LIVE SCREEN - The avatar is then presented on a live screen, displaying an arc which mirrors the live sensor reading. This screen asks the user to confirm or deny that the avatar's leg position matches theirs

(4) ALIGNMENT COMPLETE - If the user confirms that the avatar's leg matches theirs, the user completes sensor alignment.


STUDY DATA - CHECK ANGLE ACCURACY SCREEN

Observations from 22 participants:

  • 5 participants wrongly confirmed that the avatar matched their leg position when it did not

  • 3 participants did not know or were unsure what the accuracy tolerance was

  • 1 participant did not move their leg to test sensor accuracy


Study participant performing sensor alignment during the study (left). Moderator taking a goniometer reading (right)

PROBLEMS IDENTIFIED

  • When the avatar is close to zero, the user usually understands what's going on. There is, however, a grey area when it is out by a small amount (e.g., 10-15 degrees)

  • Users assume that a moving avatar means a match and do not necessarily check that it matches their straight leg

 

ACTION

  • Revisit this screen (and the wider flow) to see what could be done to improve success and confidence in selecting the correct option: ‘Yes, her leg matches’ vs. ‘No, it looks different’

 

CONSTRAINTS

  • Not all users can fully straighten their leg during the accuracy check due to recent or upcoming surgery. Some will only achieve 30 degrees (leg slightly bent)

Design Concept

My proposal was to update the sensor accuracy check screens and remove the sensor accuracy live screen. The app checks the sensor reading when the user selects 'I have straightened my leg'. I came up with 4 scenarios where the app responds differently based on the sensor reading (the branching is sketched in code after the scenarios below).

  • SCENARIO 1 - Sensor records 0-15 degrees

  • SCENARIO 2 - Sensor records between 15 degrees and the goniometer reading*

  • SCENARIO 3 - Sensor records greater than the goniometer angle (obviously wrong, as the user in this case would be flexing, not straightening)

  • SCENARIO 4 - Sensor records hyperextension

 

*The goniometer reading is recorded by the clinician during the user's first appointment, while the user is standing on the block (flexing their leg), following the same instructions as shown in sensor alignment.
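To make the branching concrete, here is a minimal sketch (in TypeScript) of how the app might classify the straight-leg reading into the four scenarios. The function and type names are hypothetical, and the conventions (hyperextension reported as a negative angle, inclusive boundaries) are my assumptions; the 15-degree threshold and the goniometer comparison come from the scenarios above.

```typescript
// Hypothetical sketch only. Assumes hyperextension is reported as a
// negative angle (past fully straight) and that boundaries are inclusive.
type Scenario =
  | "AUTO_COMPLETE"      // Scenario 1: 0-15 degrees
  | "STATIC_IMAGE_CHECK" // Scenario 2: 15 degrees to goniometer reading
  | "OBVIOUSLY_WRONG"    // Scenario 3: greater than goniometer angle
  | "HYPEREXTENSION";    // Scenario 4: beyond straight

function classifyReading(angleDeg: number, goniometerDeg: number): Scenario {
  if (angleDeg < 0) return "HYPEREXTENSION";
  if (angleDeg <= 15) return "AUTO_COMPLETE";
  if (angleDeg <= goniometerDeg) return "STATIC_IMAGE_CHECK";
  return "OBVIOUSLY_WRONG"; // likely flexing rather than straightening
}
```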

SCENARIO 1 - SENSOR RECORDS 0-15 DEGREES

The app auto-progresses the user (completes alignment) if the sensor records 0-15 degrees. There's no need for the user to test accuracy against the system because we can assume it's acceptable.

WHY THIS IS AN IMPROVEMENT

 

The user doesn't have to test the sensor accuracy against a live screen. 


SCENARIO 2 - SENSOR RECORDS BETWEEN 15 DEGREES AND THE GONIOMETER READING

  1. App displays a static image of an avatar leg resembling the sensor recording and asks the question, ‘Does the leg in the image below match your leg when you straightened it as far as you could?’ Possible answers:

    • ‘Yes, her leg matches’. In this case the app progresses

    • ‘No, my leg was straighter’

    • ‘No, my leg was more bent’

  2. If the user selects that it matches, they complete alignment

  3. If the user selects that their leg was straighter or more bent than the image shows, the app asks them to check alignment again (this lets the app establish whether they performed the check correctly, i.e. stood as straight as they can)

  4. If this error occurs more than twice, the app asks the user to redo sensor alignment (this lets the app establish whether they understood the alignment instructions)

  5. If this error occurs more than 5 times, the app asks them to contact Technical Support (this escalation ladder is sketched below)
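A minimal sketch of the Scenario 2 escalation ladder, assuming a simple counter of mismatch answers. The names are hypothetical; the thresholds (more than twice, more than 5 times) come from the steps above.

```typescript
// Hypothetical sketch only; answer and action names are illustrative.
type MatchAnswer = "MATCHES" | "STRAIGHTER" | "MORE_BENT";
type NextStep =
  | "COMPLETE_ALIGNMENT"
  | "RECHECK_ACCURACY"
  | "REDO_SENSOR_ALIGNMENT"
  | "CONTACT_TECHNICAL_SUPPORT";

function nextStepForScenario2(answer: MatchAnswer, mismatchCount: number): NextStep {
  if (answer === "MATCHES") return "COMPLETE_ALIGNMENT";
  // mismatchCount includes this mismatch; thresholds from the steps above.
  if (mismatchCount > 5) return "CONTACT_TECHNICAL_SUPPORT";
  if (mismatchCount > 2) return "REDO_SENSOR_ALIGNMENT";
  return "RECHECK_ACCURACY"; // ask the user to check alignment again
}
```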

 

 

WHY THIS IS AN IMPROVEMENT

 

The user no longer sees a moving arc that they don't understand, and it removes any ambiguity about what the accuracy tolerance is.

 

If the image doesn't match, the app can systematically rule out what might have gone wrong.


SCENARIO 3 - SENSOR RECORDS GREATER THAN THE GONIOMETER ANGLE (OBVIOUSLY WRONG)

  1. There is no point in showing the user the static leg image from Scenario 2, so the app displays a modal with the following suggestions before asking the user to check alignment again:

    • Make sure you are straightening your leg and not bending

    • Make sure you are standing straight

  2. If this occurs more than twice, the app asks the user to redo sensor alignment (this lets the app establish whether they performed the check correctly, i.e. stood as straight as they can)

  3. If this occurs more than 5 times, the app asks the user to contact their practitioner and arrange a new goniometer reading (this ladder is sketched below)
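Scenario 3 follows the same ladder shape, but escalates to a new goniometer reading rather than Technical Support. A minimal sketch, again with hypothetical names:

```typescript
// Hypothetical sketch only; action names are illustrative,
// thresholds come from the steps above.
function nextStepForScenario3(failureCount: number): string {
  if (failureCount > 5) return "CONTACT_PRACTITIONER_FOR_NEW_GONIOMETER_READING";
  if (failureCount > 2) return "REDO_SENSOR_ALIGNMENT";
  return "SHOW_SUGGESTIONS_THEN_RECHECK"; // modal with the two tips above
}
```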

WHY THIS IS AN IMPROVEMENT

 

The app gets to the root cause more quickly. 


SCENARIO 4 - SENSOR RECORDS HYPEREXTENSION

  1. App gives the user 2 opportunities to redo sensor alignment (rechecking sensor accuracy wouldn’t give a different result in this scenario)

  2. If the error persists, the app directs the user to their physical therapist (sketched below)
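A minimal sketch of this rule, with hypothetical names:

```typescript
// Hypothetical sketch only. Rechecking accuracy wouldn't change the
// result here, so the app goes straight to redoing alignment,
// allowing at most two attempts before escalating.
function nextStepForScenario4(redoAttempts: number): string {
  return redoAttempts < 2 ? "REDO_SENSOR_ALIGNMENT" : "CONTACT_PHYSICAL_THERAPIST";
}
```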

WHY THIS IS AN IMPROVEMENT

 

Again, the app focuses on what the root cause might be and can rule out issues related to sensor alignment before instructing the user to contact their physical therapist.


WHAT CAME NEXT

Given time constraints ahead of an upcoming study, I needed to propose something with a lower development cost.

 

I came up with 3 suggestions:

  1. The current implementation, minus the live arc and with a static arrow added

  2. The proposal above 

  3. Ask the user to straighten their leg, capture the sensor recording, and say nothing (the problem with this option is that it won't teach us how users are getting it wrong)

Final Design

We agreed to implement concept 1. The enhancements were as follows:

  1. Straighten leg instruction screen

    • Copy updated to ask the user to straighten their leg as far as they can

    • Visual updated to emphasise straightening the leg

  2. Live feedback screen

    • Removal of the live arc; the avatar would still move, however

    • Inclusion of an arrow pointing in the correct direction

WHY THIS IS AN IMPROVEMENT

The app more clearly instructs the user to stand as straight as they can. 

I don't think it addresses the ambiguity around accuracy tolerance (a key observation from the study) as much as the earlier proposal does, but by replacing the arc with a static arrow, the app more clearly signals that the leg should be straightened.
