
Prototyping, testing, and scaling solutions

Once the MDT has agreed upon the preliminary design and function, the next stage is to generate and test a prototype version of the product, which can then be piloted and scaled for production once improvements have been made.

Prototyping design and function

Prototyping, which can be referred to as Alpha testing, [1] is a low-cost experimental activity to develop, test, refine and re-test potential solutions.

Low-fidelity prototypes are:

  • Inexpensive and simple visualisations of the proposed product. 
  • Paper-based sketches or diagrams showing the basic design and functions, giving end-users a sense of what the MDT is thinking. 
  • Easy to change, with negligible costs and easy for users to understand. 
  • Key to facilitating co-design of the product as users can begin to see and feel what is being proposed and can offer opinions, good and bad, to improve the design. 

High-fidelity prototypes are:

  • More complex than low-fidelity prototypes and are 'click-through' or coded versions. 
  • More difficult to change or refine, often with design or development costs involved. However, they can offer users realistic experiences of how the product works during the testing phase. 

Once users and stakeholders accept a prototype design, the prototype must be evaluated with end-users in the environment where it will be used, for the purpose it was created.

Testing the prototype

Prototype testing aims to generate valuable and meaningful feedback to reshape the design, making it more user-friendly [2] and fit for purpose. 

User-based usability evaluation methods are undertaken during this phase with the specific objective of identifying the following: 

  • Errors or issues within the interface between the user and the technology – whether this is an app, website, platform, portal or system. 
  • Pinch and pain points that impact how users interact with the product. 
  • Effectiveness, efficiency and levels of satisfaction in use by the intended users (one common satisfaction measure is sketched after this list). 
  • Areas where refinement or additional features are needed to improve usability, user experience or accessibility, relating to visuals, function, navigation, workflows, data collection, information design or architecture. 
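
One widely used way to quantify satisfaction is the System Usability Scale (SUS), a ten-item questionnaire scored from 0 to 100. Below is a minimal sketch of the standard SUS scoring calculation in Python; the responses shown are hypothetical and only illustrate the arithmetic, not data from any evaluation described here.

    def sus_score(responses: list[int]) -> float:
        """Compute a System Usability Scale score (0-100) from ten
        Likert responses (1 = strongly disagree ... 5 = strongly agree)."""
        if len(responses) != 10:
            raise ValueError("SUS requires exactly 10 item responses")
        total = 0
        for i, r in enumerate(responses):
            # Odd-numbered items (1st, 3rd, ...) are positively worded,
            # contributing (response - 1); even-numbered items are
            # negatively worded, contributing (5 - response).
            total += (r - 1) if i % 2 == 0 else (5 - r)
        return total * 2.5  # scale the 0-40 raw total to 0-100

    # Hypothetical responses from one end-user testing session
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 85.0

Scores above roughly 68 are conventionally read as above-average usability, which makes SUS useful for comparing prototype iterations.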

These activities are not co-design activities. However, participants involved in the design can be invited to take part and provide real-world feedback on use of the prototype, which is valuable for informing further redesigns or optimisation of the product. 

In some situations, end-users may not be available to be involved in testing. This should not stop the MDT from evaluating its prototypes; instead, expert-based or automated testing are options. 

  • Expert-based testing can involve external professionals skilled in undertaking evaluations based on industry standards or on stakeholders' or end-users' expected interactions. Methods include heuristic analysis, cognitive walkthroughs and expert reviews. 
  • Automated testing uses software to identify flaws or weaknesses in design or function. MDT members can analyse data using software programs focussing on performance indicators such as speed, stability and reliability under high user loads and with available bandwidth (a load-testing sketch follows this list). 
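
As one concrete illustration of automated performance testing, the sketch below uses the open-source Locust load-testing tool to simulate many concurrent users interacting with a product under test. The endpoints, payloads and host are hypothetical placeholders, not details from this guide.

    from locust import HttpUser, task, between

    class CareStaffUser(HttpUser):
        """Simulates one staff member interacting with the product."""
        wait_time = between(1, 5)  # seconds of 'think time' between tasks

        @task(3)  # weighted: viewing records is three times as frequent
        def view_resident_record(self):
            self.client.get("/residents/123")  # hypothetical endpoint

        @task(1)
        def record_observation(self):
            self.client.post("/observations",
                             json={"resident_id": 123, "note": "test entry"})

    # Run with, for example:
    #   locust -f loadtest.py --host https://staging.example.org
    # Locust then reports response times, failure rates and throughput as
    # the number of simulated users is ramped up, exposing speed,
    # stability and reliability issues under load.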

Research activities and evaluations focus on generating narrative descriptions, performance data, data analytics and end-user satisfaction levels. [3] 

Once prototyping is completed, the MDT builds a 'final', 'production' or 'minimum viable product' (MVP) version for implementation. The product can be rolled out on a small scale or fully deployed, depending on the product, environment, budget and levels of confidence in the product design.

Piloting and small-scale testing

Once the prototype has been evaluated and refined, a pilot or small-scale study can be undertaken to identify further usability issues. This is important because piloting is undertaken with the product fully integrated into systems and workflows, and evaluated by end-users for its intended purpose.

Small-scale testing and piloting can be referred to as 'Beta testing' or 'User Acceptance Testing', where versions are released to a restricted set of users or within a secure environment to generate feedback for improving the product before release. [1]
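
One common mechanism for restricting a Beta release to a set of users is a simple allowlist combined with a percentage-based rollout flag. The sketch below is a generic illustration of this pattern, assuming hypothetical user IDs; it is not a prescribed approach.

    import hashlib

    BETA_ALLOWLIST = {"staff-0042", "staff-0117"}  # hypothetical pilot users
    BETA_PERCENTAGE = 10  # also open the Beta to 10% of remaining users

    def in_beta(user_id: str) -> bool:
        """Return True if this user should receive the Beta version."""
        if user_id in BETA_ALLOWLIST:
            return True
        # Hashing the ID gives each user a stable bucket from 0-99, so
        # the same user always gets the same answer without any extra
        # stored state.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % 100 < BETA_PERCENTAGE

    print(in_beta("staff-0042"))  # True (allowlisted)

Keeping assignment deterministic matters during a pilot: a user who flipped between Beta and production versions across sessions would generate confusing feedback.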

Functional production versions or MVPs are typically piloted; these are often developed with core features (including a functional front end and back-end data capture) and with interfaces or functions iterated from the tested prototype.

  • Key pilot data generated from evaluations include the real-world efficacy and effectiveness of the prototype/MVP.
  • Measures of performance (speed, stability, reliability and data capture) and behaviour include those assessing usability, user experience, user interactions, impacts on workflows and levels of burden for staff or older people.

User-based feedback generated from small-scale testing assesses end-user perceptions, and activities can include questionnaires, interviews, workshops and focus groups. This information is complemented by analysis of performance data on success and failure of use to identify technical issues [4] and areas for optimisation to improve user experience. [5]
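
As a minimal sketch of how such performance data might be analysed, assume a hypothetical interaction log with user_id, task and outcome columns; the pandas code below computes a per-task success rate, where low rates flag candidate technical or usability issues to follow up in interviews or expert review.

    import pandas as pd

    # Hypothetical interaction log exported from the piloted product
    events = pd.DataFrame({
        "user_id": ["u1", "u1", "u2", "u2", "u3", "u3"],
        "task":    ["login", "add_note"] * 3,
        "outcome": ["success", "failure", "success",
                    "success", "failure", "failure"],
    })

    success_rate = (events.assign(ok=events["outcome"].eq("success"))
                          .groupby("task")["ok"].mean()
                          .sort_values())
    print(success_rate)
    # add_note    0.333333  <- failing for two of three users
    # login       0.666667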

Scaling for production

The 'production' version is a refined or reiterated version of the prototype/MVP tailored for use within a specific environment by a defined group of users.

Implementation, rollout, or deployment all refer to the final development phase. This is also known as the live stage, [1] where the product is released. Feedback and data are still continuously captured from user interactions with the final version of the product. This information can potentially shape further improvements to the product across the implementation phase.
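
Continuous capture of user interactions is often implemented as structured event logging, which keeps later analysis straightforward. A minimal sketch, assuming a hypothetical event schema and action names:

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("product.telemetry")

    def log_interaction(user_id: str, action: str, **details) -> None:
        """Emit one interaction event as a single JSON line."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,
            "action": action,
            **details,
        }
        logger.info(json.dumps(event))

    # Hypothetical call from within the deployed product
    log_interaction("staff-0042", "open_care_plan", screen="dashboard")

JSON-per-line events like these can be aggregated with the same kind of success/failure analysis used during the pilot to shape improvements across the implementation phase.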

Due to diversity in care settings, workflows, workforce and processes, scaling for widespread deployment and implementation within the aged care sector requires products to be adapted for tailored experiences. As a result, further rounds of testing in situ may be required to generate further feedback to refine the product for each facility, service or organisation.

One round of testing does not guarantee the product's utility across all settings, even if facilities or organisations share core operations or have common requirements to comply with standards or policies.

  1. Australian Digital Transformation Agency. Digital Service Standard [Internet]. Canberra (AU): Australian Government; 2024 [updated 2024 Jul 25; cited 2024 Aug 16]. Available from: https://www.digital.gov.au/policy/digital-experience/digital-service-standard.
  2. Van Velsen L, Wentzel J, Van Gemert-Pijnen JE. Designing eHealth that matters via a multidisciplinary requirements development approach. JMIR Res Protoc. 2013;2(1):e21. doi: 10.2196/resprot.2547.
  3. Harst L, Wollschlaeger B, Birnstein J, Fuchs T, Timpel P. Evaluation is key: Providing appropriate evaluation measures for participatory and user-centred design processes of healthcare IT. Int J Integr Care. 2021;21(2):24.
  4. Vermeulen J, Verwey R, Hochstenbach LMJ, van der Weegen S, Man YP, de Witte LP. Experiences of multidisciplinary development team members during user-centered design of telecare products and services: A qualitative study. J Med Internet Res. 2014;16(5):e124.
  5. Mummah SA, Robinson TN, King AC, Gardner CD, Sutton S. IDEAS (Integrate, Design, Assess, and Share): A framework and toolkit of strategies for the development of more effective digital interventions to change health behavior. J Med Internet Res. 2016;18(12):e317.