Introduction

Professionals must continuously gain expertise and secure accountability through lifelong learning. That is, they have to “re-structure, re-organise, and refine their representation of knowledge and procedures for efficient application to their work-a-day environment” (Feltovich et al. 2006, p. 57). Developing high-level expertise, as expected from professionals, requires deliberate practice at the workplace (Ericsson et al. 2006). The core assumptions of deliberate practice are that expert performance is acquired gradually and that effective improvement of performance requires the opportunity to find suitable training tasks that can be mastered sequentially and reflectively. A prerequisite for deliberate practice is that the learner—a student, resident or employee—frequently receives feedback based on an adequate evaluation and analysis of how tasks at the workplace are fulfilled and where improvement is necessary (Ericsson et al. 2006; Spector 2008). Importantly, such workplace-based assessments and the associated feedback require that non-standard work contexts be dealt with adequately by gathering performance data over longer periods of time and in different learning contexts. To this end, electronic portfolios (E-portfolios) containing selected evidence of a learner’s performances and outcomes, accompanied by his or her comments and reflections, are increasingly used to assess workplace-based learning and to support deliberate practice (Bok et al. 2012; Barrett 1998; Paulson et al. 1991; Van der Schaaf et al. 2008a).

An E-portfolio generally aims to monitor the development of a learner’s competencies (e.g., knowledge, skills and attitudes in planning and collaborating). It also aims to stimulate his or her self-assessment and reflection, which are prerequisites for becoming a lifelong learner (Meeus et al. 2006). An E-portfolio brings together information from several instruments and can be used alongside other instruments that measure learners’ specific competencies (e.g., simulations). It is an aggregated collection of digital artefacts that represent a learner’s ideas, reflections, and evidence of learning and competencies (Butler 2006; Gunawardena et al. 2010).

For the learner, an E-portfolio can serve as (1) a reflective ‘log’ demonstrating progress through the curriculum and (2) a repository of evidence regarding his or her performance (Van der Schaaf et al. 2008b). Composing an E-portfolio implies that learners reflect on information within the socio-cultural setting of the workplace at hand (Butler 2006; Dysthe and Engelsen 2011). Based on E-portfolio data, a supervisor can provide text-based and numerical feedback and assessments at specific points during a learner’s learning trajectory, regardless of location or work schedule. The feedback and assessments provide input for diverse forms of credentials, for instance certificates or badges that reflect learners’ mastery of learning goals (e.g., Linder-vanBerschot and Summers 2015; Schmidt-Crawford et al. 2014; Ten Cate et al. 2015).

Unfortunately, the implementation of E-portfolios is often ineffective and their impact on learning is limited (Van Schaik et al. 2013). This especially tends to be the case when E-portfolios are not tailored to the characteristics (i.e., tasks and context) of the workplace. Furthermore, potential data about a learner’s performances at the workplace often remain unexploited: E-portfolios usually provide a global view of a learner’s progress and offer only limited suggestions for improvement. Since most E-portfolios contain a rich learner assessment data set, they have the potential to include fine-grained analyses of learners’ performances aimed at supporting responsive adaptation towards more efficient and rewarding deliberate practice. To reach E-portfolios’ full potential, we advocate that they be enhanced with learning analytics and embedded in a curriculum environment in which feedback for, and reflection by, learners are central.

Learning analytics is a recent and rapidly evolving field aimed at optimizing the measurement, collection, analysis and reporting of data about learners and their learning contexts for the purpose of understanding and optimizing learning (Cooper 2012; Manyika et al. 2011; Mayer-Schönberger and Cukier 2013; Siemens and Gasevic 2012). The learning analytics process is cyclical and, thus, based on ongoing refinement and improvement of data processing and reporting during successive phases of collecting, analysing and visualizing information for the learners (Baker and Yacef 2009; Campbell et al. 2006; Dron and Anderson 2009; Elias 2011). The relevance of enhancing E-portfolios with learning analytics may lie in the application of probabilistic student models that enable personalized feedback based on many multi-sorted assessment moments. A student model is often seen as a probabilistic representation of a learner’s state and educational context as set out in the E-portfolio, combined with general pedagogical knowledge. It is a statistical model that translates the existing E-portfolio performance scores, and the context in which they were collected, into the learner’s current progress state in order to predict his or her (future) performance on workplace-related tasks.

Although learning analytics developments have reached educational institutions, the specific exploitation, integration and analysis of user data for assessments and feedback purposes in the workplace are still scarce. The goal of this study is to address this need by presenting an approach for developing an E-portfolio enhanced with learning analytics aimed at providing personalized assessments and feedback at the workplace. The approach is developed and used by a European Consortium of three technical and six educational institutes for professional education, with the aim to improve workplace-based assessment and feedback in the professions of medical education, veterinary education and teacher education (http://www.project-watchme.eu).

Designing an E-portfolio enhanced with learning analytics

E-portfolio environment

This section describes the context of our study: an existing E-portfolio environment to be enhanced with learning analytics. The state of the art in E-portfolio design assumes that assessment is a process of capturing a learner’s information to create a dataset that can be used to draw inferences about a learner’s progress and achievement (Mislevy et al. 2012). Such a dataset can have a qualitative nature, based on written reflections and narrative feedback, as well as a quantitative nature, for instance the number of professional tasks carried out and the performance scores received. In this study, the basis for improving workplace-based assessment and feedback is an existing E-portfolio environment that is used internationally in workplace-based learning (http://www.epass.eu, see Fig. 1). This environment consists of a collection of submission forms, file uploads and registrations of the learners’ activities. Different types of forms can be used to provide evidence for a learner’s performance and progress, such as competency-based workplace-based assessments, multi-source feedback, reflection and appraisal.

Fig. 1 Screenshot of an E-portfolio environment

The E-portfolio environment allows students to share their progress with supervisors and fellow students and to receive continuous feedback from various sources. This feedback is normally used as input to regular supervision and assessment meetings between the learner and the supervisor, during which the learner’s development at the workplace is discussed. Learners can prepare for these meetings by writing reflective reports about their progress as shown in their portfolio.

The E-portfolio content is specific to particular job roles and underlying competencies. Therefore, the roles and competencies first need to be defined per workplace domain (e.g., medical or teaching). Competencies are a learner’s knowledge, skills and attitudes needed for performing professional tasks (Eraut 1994; Hager et al. 1994); they become visible in performance or observable behavior. Next, assessment tools need to be selected and developed per domain (e.g., teacher observation forms). Then, to improve workplace-based assessment and feedback for the learners, the E-portfolio environment can be enhanced with: (1) student models that monitor the learners’ competency development, (2) a feedback module providing personalized improvement feedback (JIT module) and (3) a visualization module that visualizes the feedback in terms of personal development over time (VIZ module).

Student model description

To feed the student model with E-portfolio data, a student model database is connected to the E-portfolio environment, which contains a rich range of assessment tools. All updates of an individual learner’s internal state in the student model are based on data from the E-portfolio. An internal state refers, for instance, to a learner’s competencies or to his or her engagement or frustration in carrying out professional tasks. The E-portfolio measures and collects these data with different kinds of submission forms (see Fig. 2 for an example), where the source of the form (the type of assessment tool it is based on) and the roles involved (e.g., learner, supervisors, patients, clients) may vary. The measurements and collected data can be used by learners for reflection purposes and by supervisors to improve their feedback to learners. The student model is intended to assist learners in improving their learning and to help supervisors diagnose and intervene in problems during learners’ learning trajectories.

Fig. 2 Screenshot of an assessment form

In order to provide meaningful feedback, the student model database should contain data about: (1) the actual internal state of each learner, (2) the learning context and (3) pedagogical knowledge needed to translate the internal state and context into meaningful just-in-time messages (JIT module) and progress visualizations (VIZ module). The student model becomes more accurate as new data come in. The pedagogical knowledge is described in knowledge fragments.
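To make these three kinds of data concrete, the following Python sketch shows how records in the student model database could be structured. It is only an illustration: all class and field names are our own shorthand and do not correspond to the actual database schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List

# Illustrative record types for the three kinds of data in the student model
# database; names and fields are hypothetical.

@dataclass
class InternalState:
    # probability distributions over discrete states, e.g. {"high": 0.7, "low": 0.3}
    competencies: Dict[str, Dict[int, float]]   # EPA id -> P(entrustment level)
    engagement: Dict[str, float]
    frustration: Dict[str, float]

@dataclass
class LearningContext:
    workplace: str          # e.g. "obstetrics clerkship"
    assessor_role: str      # e.g. "supervisor", "peer", "patient"
    assessed_at: datetime

@dataclass
class KnowledgeFragment:
    name: str               # e.g. "frustrationAlert"
    inputs: List[str]       # portfolio signals the fragment reads
    # conditional probability table, filled from pedagogical knowledge
    cpt: Dict[tuple, Dict[str, float]] = field(default_factory=dict)
```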

In our design, the student model consists of five parts that interact with each other. First, rules are developed that translate narrative feedback containing words like “excellent” and “well done” into formal assessment statements such as “this feedback was positive with 80 % confidence”. Second, the context information from the portfolio (e.g., information about the workplace and the assessors) is translated into an observation such as “the assessment was done in a difficult setting with 80 % confidence”. Third, the student model translates the above statements and additional information from the portfolio into statements such as “this learner is probably (with a confidence >80 %) at the highest performance level for this competency”. Fourth, output from the student model and input from the user are combined to produce a statement such as “At this point, it is probably wise (with a confidence of the order of 80 %) to advise the learner of an apparent lack of procedural skills and the remedial training to be pursued”; such a statement then results in a timely message to the learner. The fifth part of the model consists of the probabilistic knowledge in all other parts of the model, combined with aggregated data from the portfolio system; it can contain rules on how to select the best data to represent for the given context.
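As a minimal illustration of the first two rule-based parts, the Python sketch below translates narrative feedback and context information into formal statements with a confidence value. The cue words, weights and thresholds are assumptions made for the example, not the project’s actual rules.

```python
# Illustrative sketch of the first two rule-based parts of the student model;
# cue words and confidence values are assumptions, not the actual rules.

POSITIVE_CUES = ("excellent", "well done", "outstanding")

def narrative_to_statement(text: str) -> dict:
    """Part 1: translate narrative feedback into a formal assessment statement."""
    hits = sum(cue in text.lower() for cue in POSITIVE_CUES)
    confidence = min(0.5 + 0.3 * hits, 0.95)
    return {"statement": "feedback was positive", "confidence": round(confidence, 2)}

def context_to_statement(context: dict) -> dict:
    """Part 2: translate portfolio context information into an observation."""
    difficult = context.get("setting") == "acute ward"
    return {"statement": "assessment done in a difficult setting",
            "confidence": 0.8 if difficult else 0.2}

print(narrative_to_statement("Well done, an excellent consultation."))
# {'statement': 'feedback was positive', 'confidence': 0.95}
print(context_to_statement({"setting": "acute ward"}))
# {'statement': 'assessment done in a difficult setting', 'confidence': 0.8}
```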

Figure 3 shows an example of a student model knowledge fragment in our design regarding the level of engagement. Engagement of a student is an important factor in study success and development (Wolf-Wendel et al. 2009). Low levels of engagement, often caused by frustration of the students, lead to poor results or even dropout and burnout (Janosz et al. 2008). If the student model can use the portfolio content to detect a high level of frustration, it can help the student’s supervisor to respond to the situation in a timely and adequate manner. This “frustration alert” can be based on many sources of information in the portfolio, such as a sudden drop in scores, poor feedback-seeking behavior, inconsistency in the portfolio and so on. In our study, one of the knowledge fragments is dedicated to this frustration alert.

Fig. 3 Knowledge fragment: frustration alert

Figure 3 shows how the level of frustration (low or high) is estimated from four inputs at a given moment in time (t), with tprev standing for the previous moment in time. In this example four different input signals, derived from the portfolio, were used: (1) the level of frustration at the previous moment (tprev), which can be low or high; (2) the consistency of the portfolio (from the fragment portfolioConsistency), with three levels (low, medium, high); (3) drop in score (scoreHasDropped, with three levels: no drop, little drop, strong drop); and (4) feedback-seeking strategy (poor or good). The feedback-seeking strategy is considered poor when no feedback, or only feedback from a single person (for instance the supervisor), is included in the portfolio; it is considered good when multiple stakeholders (e.g., peers, supervisors, and staff) provided feedback at least twice. The output of this node can be presented as a message to the supervisor, signalling a learner’s lack of engagement.
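The sketch below illustrates how such a frustration-alert fragment could combine the four portfolio-derived inputs into a probability of high frustration. In the actual student model this is a node with a full conditional probability table; here the table is approximated with illustrative weights to keep the example short, so the numbers should not be read as the calibrated probabilities.

```python
# Minimal, illustrative approximation of the "frustration alert" fragment.
# The weights and threshold are assumptions for the example only.

WEIGHTS = {
    "frustration_prev": {"low": 0.0, "high": 0.4},
    "portfolio_consistency": {"high": 0.0, "medium": 0.1, "low": 0.25},
    "score_drop": {"none": 0.0, "little": 0.1, "strong": 0.3},
    "feedback_seeking": {"good": 0.0, "poor": 0.2},
}
BASE_RATE = 0.05  # prior probability of high frustration

def p_frustration_high(evidence: dict) -> float:
    """Estimate P(frustration = high | the four portfolio-derived inputs)."""
    p = BASE_RATE + sum(WEIGHTS[var][val] for var, val in evidence.items())
    return min(p, 0.99)

evidence = {"frustration_prev": "low", "portfolio_consistency": "low",
            "score_drop": "strong", "feedback_seeking": "poor"}
if p_frustration_high(evidence) > 0.7:
    print("Frustration alert: notify the supervisor.")
```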

Personalized feedback architecture

The overarching architecture is envisioned as a collection of loosely coupled modules that can be developed and tested relatively independently and deployed in a distributed manner. The architecture consists of the following elements: an existing E-portfolio system, external assessment tools (such as observations, a game or simulation), a student model engine, a Just-in-Time feedback (JIT) module and a Visualisation (VIZ) module (see Fig. 4). Each element has its own representational state transfer application programming interface (REST API) through which it communicates with its two peer APIs, allowing the elements to function as a whole. For instance, the JIT module consists of the “JITVIZ user interface (UI)” (frontend) and the “JITVIZ API” (backend); the “JITVIZ API” is responsible for communication between the UI components and the Student Model API. The flow of operations is as follows. First, a user uses his or her browser or mobile platform to include new data in the portfolio (e.g., a feedback form), which is stored as portfolio data. Second, the E-portfolio automatically sends this data update to the Student Model API (see the arrows between the REST API of the EPASS system and the REST API of the student model module). These updates are anonymized and a procedure for “privacy-enhanced” user identities is applied. Third, after completing the data analysis, the Student Model API sends the outcome data to the JIT and VIZ module APIs (see the arrows between the REST API of the student model module and the REST APIs of the JIT/VIZ modules), where it is represented as a specific feedback message or progress visualization in the user interface (UI). Finally, the learner receives the feedback in written and visualized form in the display of the E-portfolio by means of the integrated JIT and VIZ widgets.

Fig. 4 ICT architecture of the E-portfolio enhanced with learning analytics

The APIs act as corridors through which information is exchanged and functionality is invoked. The JIT module uses sets of pre-formulated feedback messages, from which the system selects the message most appropriate for the relevant competency. A general data language is developed that describes how E-portfolio data are translated and, thus, transported between the APIs. A central authentication server (Auth) is included that arranges secure authentication between the elements.
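The Python sketch below illustrates the second step of the flow described above: the E-portfolio forwarding a new (anonymized) submission form to the Student Model API over REST. The endpoint path and payload fields are assumptions made for illustration and do not reproduce the project’s actual data language.

```python
import requests  # third-party HTTP client

# Hypothetical endpoint; the real REST API paths are project-specific.
STUDENT_MODEL_API = "https://studentmodel.example.org/api/v1"

def push_portfolio_update(pseudonym: str, form: dict) -> dict:
    """Forward a new submission form to the Student Model API and return
    the model's response (which is then passed on to the JIT/VIZ APIs)."""
    payload = {
        "user": pseudonym,              # privacy-enhanced identity, not a real name
        "epa": form["epa"],
        "entrustment_level": form["level"],
        "narrative": form.get("narrative", ""),
    }
    response = requests.post(f"{STUDENT_MODEL_API}/updates", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()

# Example call for a mini-CEX form filled in by a supervisor:
# push_portfolio_update("user-3f9a", {"epa": "EPA1", "level": 3,
#                                     "narrative": "Solid history taking."})
```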

Iterative design approach to develop an E-portfolio enhanced with learning analytics

In this section we describe the steps taken to develop the E-portfolio enhanced with learning analytics, to implement it for workplace-based feedback and assessment as part of professional education, and to evaluate its effects (e.g., usability). We use an iterative design approach with a central role for the user that matches the information system development life cycle (SDLC, e.g., Mantei and Teorey 1989; Walls et al. 1992), including a specification, design, implementation and utilization phase (Swanson 1988). The position is taken that design is an integral part of implementation, with detailed decision making at each stage of the cycle and involvement of the end users (e.g., students and supervisors) (Walls et al. 1992). Our approach consists of five iterative phases in which the pedagogical knowledge about learning at the workplace is made explicit (phase 1) and translated into probabilistic student models (phase 2). Based on the information collected in phases 1 and 2, personalized improvement feedback and a visualization of personal development over time are developed (phase 3). When the prototype of the E-portfolio, including the student model and the JIT and VIZ modules, is completed, it is implemented in professional education (phase 4), where its effects are evaluated (phase 5). Based on the findings, suggestions for improvement are provided and fed back to the preceding phases of the iterative design.

Development of competencies and assessment instruments

Key in enhancing E-portfolios with learning analytics is the alignment of the student model with a substantive learning theory, operationalized in the descriptions of tasks needed to be carried out at the workplace. Consequently, phase 1 aims at defining the workplace-related tasks and the assessment procedures that should be included in the E-portfolio to ensure a valid workplace-based assessment.

To this end, in our design approach, users from professional domains (e.g., experts and learners in medical, veterinary and teacher education) were consulted by means of Delphi-studies and seven focus groups (n participants = 78) to determine what workplace-related tasks need to be assessed and what types of evidence (e.g., products, performance scores, process descriptions) should go in the E-portfolio (Jonker et al. 2015; Ten Cate et al. 2015; Wisman-Zwarter et al. 2016). In addition, literature studies to develop workplace-based markers were carried out (Duijn et al. 2016; Krull and Leijen 2015).

In our design approach, workplace-related tasks were operationalized in terms of units of professional practice: tasks or responsibilities that can be entrusted to a learner to execute unsupervised once a sufficient level of specific competency has been obtained. These so-called entrustable professional activities (EPAs) appeared to provide a valid and useful anchor for the assessment of learners in medical education training programs. The EPA framework enables the development of assessment instruments applicable to entrustment decisions. If a learner is to be trusted to execute an EPA unsupervised, the supervisor (or the supervising team) must ascertain that he or she demonstrates the relevant knowledge and skills at the workplace. EPAs are independently executable within a time frame, and observable and measurable in their process and outcome; they are therefore suitable for entrustment decisions.

Table 1 shows an example of the result of a job analysis procedure yielding a list of EPAs in medical education, mapped against pre-defined domains of competencies. It reveals which domains of competencies weigh heavily in specific workplace-related tasks. For instance, in medical education one could envisage the EPA “discharging patients from the hospital” as being demanding (Hauer et al. 2013). In terms of the CanMEDS competency framework domains (Frank 2005), this would require an emphasis on the role of the Medical Expert, who must deliver medical discharge summaries that provide an excellent overview of each case. It also includes discharging the responsibility of the Health Advocate, since the doctor must make sure that the patient is well received in whatever new environment he or she arrives after discharge (e.g., verifying that there is a spouse or carer at home; if not, arrangements must be made for day care, a nursing home or a rehabilitation centre). Further, the role of Communicator is included, as the doctor may have to call other health care services until a suitable place is found for the patient and discuss the changing situation with the patient and his or her environment.

Table 1 Entrustable professional activities

The E-portfolio enables learners to send submission forms (competency requests) for each EPA to their supervisor, who will assess the entrustment level; level 1: not allowed to practice the EPA; level 2: practice with full, direct supervision; level 3: practice with supervision on demand; level 4: “unsupervised” practice allowed; level 5: supervision task may be given. The learner can view the submission forms and the associated entrustment level in the E-portfolio environment (see Fig. 5 for an example from an obstetrics and gynaecology training programme). Level 4 equates to the level required for graduation.

Fig. 5 Overview of requests on EPA entrustment levels for medical education
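As a small illustration, the five entrustment levels can be represented as an ordered enumeration with the graduation threshold at level 4. The names below are our own shorthand, not terminology from the submission forms.

```python
from enum import IntEnum

class EntrustmentLevel(IntEnum):
    """The five supervision levels used in the EPA submission forms."""
    NOT_ALLOWED = 1            # not allowed to practice the EPA
    DIRECT_SUPERVISION = 2     # practice with full, direct supervision
    SUPERVISION_ON_DEMAND = 3  # practice with supervision on demand
    UNSUPERVISED = 4           # "unsupervised" practice allowed
    MAY_SUPERVISE = 5          # supervision task may be given

GRADUATION_THRESHOLD = EntrustmentLevel.UNSUPERVISED

def ready_for_graduation(level: EntrustmentLevel) -> bool:
    # e.g. ready_for_graduation(EntrustmentLevel.SUPERVISION_ON_DEMAND) -> False
    return level >= GRADUATION_THRESHOLD
```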

Development of student models

Based on the output of phase 1, educational data mining tools and prediction techniques were selected to develop student models. A notable challenge is to allow for aggregation and tailored feedback based on many multi-sorted assessment moments and contexts. E-portfolio data can, for instance, be based on written or electronic tests, skills tests in simulations, direct observation of procedural skills (DOPS), mini-clinical evaluation exercises (mini-CEX), case-based discussions, multisource feedback (MSF) and product evaluations (Ten Cate et al. 2015). The context for every learner differs greatly in terms of the appropriate assessments (i.e., the order, amount, kind of assessment and points of assessment) and the number and roles of the assessors. To generate valid workplace-related and personalized feedback, the student model demands a clear pedagogical and technological underpinning for, at least, the following aspects:

  1. Prediction of entrustability: What is, at this moment, probably the current level of performance of a learner for a given EPA? This can be expressed as a probability distribution over the levels x for that EPA given the current evidence (see the sketch after this list).

  2. Selection of feedback: What is the best type of feedback action to select for a given learner at a given moment in a given context?

  3. Selection of topic of interest: Which EPA is at this moment of most interest to a learner and supervisor?
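The first question can be answered by maintaining, per learner and EPA, a probability distribution over the five entrustment levels and updating it whenever a new assessment arrives. The Python sketch below illustrates such an update; the likelihood table and the example scores are illustrative assumptions, not the calibrated probabilities of the actual student model.

```python
# Illustrative Bayesian update of P(entrustment level | evidence) for one EPA.

LEVELS = [1, 2, 3, 4, 5]

def likelihood(observed: int, true_level: int) -> float:
    """Assumed P(observed form score | true level): assessors tend to score
    close to the true level."""
    distance = abs(observed - true_level)
    return {0: 0.6, 1: 0.15}.get(distance, 0.05)

def update(prior: dict, observed: int) -> dict:
    """One Bayesian update after a new assessment score."""
    unnormalized = {l: prior[l] * likelihood(observed, l) for l in LEVELS}
    total = sum(unnormalized.values())
    return {l: p / total for l, p in unnormalized.items()}

belief = {l: 0.2 for l in LEVELS}      # uniform prior over the five levels
for score in [2, 3, 3]:                # three assessment moments
    belief = update(belief, score)
print(max(belief, key=belief.get), belief)  # most probable level and full distribution
```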

To deal properly with the different and changing learning contexts, we advocate developing a student model based on a probabilistic reasoning technique such as multi-entity Bayesian networks (MEBN; Laskey 2008). Building the knowledge fragments that constitute the MEBN requires that as much pedagogical knowledge about learning at the workplace as possible is made explicit and translated into probabilistic terms. Using MEBN for feedback and assessment based on the E-portfolio data implies that different types of input (e.g., scores on multi-source feedback, written feedback by supervisors), each with a different impact, together point to the ‘true’ performance level of the learner. Below, we describe the subsequent elements of the student model as depicted conceptually in Fig. 6, together with the flow of information through the model. For example, when a learner receives a level 1 (below level) assessment decision for EPA 1 (i.e., he or she is not allowed to practice EPA 1), this information is processed by the student model as follows (see Fig. 6):

Fig. 6 Functional representation of the student model

Assessment outcome Here, the information enters that the learner scores ‘below level’ for the EPA ‘Consulting new ambulatory patients’. To this numerical score, narrative information can be added (the block ‘Narratives’ in Fig. 6), provided by the assessor in the textual fields of the assessment forms. These texts are automatically translated into a positive or negative value (a sentiment) by identifying the parts of the text that relate to student performance and determining how strongly the text indicates a positive, neutral or negative assessment. An example is the wording “Mary is an excellent communicator”. In order for the system to identify “excellent communicator” as a positive sentiment, the sentence is first cut into phrases, which tell us that “excellent” relates to “communicator”; “excellent” is then identified as a positive qualifier. So the system splits the text into the correct sentence fragments. Next, a dictionary of words and qualifiers is used to find out whether ‘excellent communicator’ expresses a sentiment, whether it is positive or negative and to what extent. Since the method is based on a dictionary approach, it can easily be transferred to other languages.
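A minimal sketch of this dictionary-based step is given below: the narrative is cut into phrases, and phrases that pair a qualifier with a competency-related term are scored. The word lists and scores are illustrative assumptions, not the project’s actual sentiment lexicon.

```python
import re

# Illustrative qualifier and competency dictionaries; scores are assumptions.
QUALIFIER_SCORES = {"excellent": 1.0, "good": 0.6, "adequate": 0.2,
                    "weak": -0.6, "poor": -0.8}
COMPETENCY_TERMS = {"communicator", "communication", "collaboration",
                    "professionalism"}

def sentiment_phrases(text: str):
    """Cut the narrative into phrases and score those that pair a
    qualifier with a competency-related term."""
    results = []
    for phrase in re.split(r"[.,;]", text):
        words = phrase.lower().split()
        qualifiers = [QUALIFIER_SCORES[w] for w in words if w in QUALIFIER_SCORES]
        mentions_competency = any(w in COMPETENCY_TERMS for w in words)
        if qualifiers and mentions_competency:
            results.append((phrase.strip(), sum(qualifiers) / len(qualifiers)))
    return results

print(sentiment_phrases("Mary is an excellent communicator, but her notes are weak."))
# [('Mary is an excellent communicator', 1.0)]
```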

EPA level In this block the information from existing EPA level decisions is collected. In our example, the EPA 1 competency level is ‘below level’, meaning insufficient, since the learner is not allowed to carry out EPA 1, even with supervision.

Domain competency level In this block, the EPA level information and the assessment score information are used to assess the level for the separate competency domains. Since EPA 1 relates to four of the competency domains, those underlying competencies (see Table 1) consequently cannot be rated as sufficient either.

Interest selector This block represents the selection of the information that is relevant to visualize at the desired moment. In the example of Fig. 6, EPA 1 is selected for providing personalized feedback to the learner and this information is sent to the VIZ module.

Feedback decision The last block in Fig. 6 represents the decision on what feedback to send. In our example, the JIT module will generate a personalized feedback message to inform the learner about the insufficient score for EPA 1 and provides improvement feedback (how to reach level 2). The VIZ module incorporates the score in the timeline and notifies the learner whether there is a negative trend or an outlier.

Development of prototypes of the personalized feedback modules

In this phase, the JIT and VIZ modules—representing the output of the student model—are developed. Together they provide personalized feedback to the learner (see Fig. 7). Based on the pedagogical knowledge (phase 1) and the student model (phase 2), three stakeholder meetings were organized in which four stakeholder groups were defined: technical, managerial, students and supervisors. The students and supervisors (N = 46) were the main target group. The stakeholders were users of the system; they worked in one of the six institutes for professional education that took part in the project and participated voluntarily. They gathered for about 2 h in a meeting room in one of the institutes. The developers and researchers of the project led the meetings, which discussed how to: (1) formulate the personalized feedback messages (JIT module) and (2) visualize the learners’ progress through the curriculum (VIZ module). The main outcomes were that users envisaged the JIT module providing feedback messages in the categories of notifications, content feedback and longitudinal feedback. Further, they expected both quantitative (scores) and qualitative (narrative) feedback, and they preferred a simple overview of students’ progress in performance levels.

Fig. 7 Screenshot of the E-portfolio with prototype JIT feedback

The personalized feedback is provided at two levels of granularity. First, the student receives aggregated feedback (see Table 2; Fig. 7). This type of feedback informs the student about how the assessments and the associated entrustment level compare to the student’s previous assessments (improvement, positive/negative trend), to the assessments their peers received (cohort) and to the number of required assessments (gaps). Furthermore, the messages indicate whether the supervisor added narrative feedback in the submission form.

Table 2 Examples of aggregated personalized feedback
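The Python sketch below illustrates how aggregated messages in these categories could be assembled from a student’s current and previous entrustment levels, the cohort average and the number of completed assessments. The wording and thresholds are illustrative, not the actual JIT message templates.

```python
# Illustrative assembly of aggregated JIT feedback messages per category.

def aggregated_feedback(current: int, previous: int, cohort_mean: float,
                        n_assessments: int, n_required: int,
                        has_narrative: bool) -> list:
    messages = []
    if current > previous:
        messages.append(f"Improvement: your entrustment level rose from {previous} to {current}.")
    elif current < previous:
        messages.append("Negative trend: your latest assessment is below the previous one.")
    if current < cohort_mean:
        messages.append("Cohort: your level is currently below the average of your peers.")
    if n_assessments < n_required:
        messages.append(f"Gap: {n_required - n_assessments} more assessments are required for this EPA.")
    if has_narrative:
        messages.append("Your supervisor added narrative feedback to this form.")
    return messages

for line in aggregated_feedback(current=2, previous=3, cohort_mean=2.8,
                                n_assessments=1, n_required=3, has_narrative=True):
    print(line)
```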

If a student requests more specific personalized feedback, he or she can interact with the JIT module and VIZ module widgets that are both integrated in the E-portfolio environment. By clicking on the JIT feedback message or on the interactive graph, extra detailed information in each of the categories is provided (see Table 3; Fig. 8). This means that the comparison is made explicit (e.g., the performance level increased from 2 to 3) and concrete suggestions for improvement (e.g., “In order to reach performance level 4 you should…”) and supervisor feedback are provided. The interactive graph (Fig. 8) represents a timeline that shows the student how he or she has developed over time; the coloured trend lines stand for separate EPAs and the levels the student reached over time. The design allows for close monitoring of students’ progress by visualizing their performance on the EPAs by means of graphs and figures as well as narrative feedback. In this way it provides an overview of students’ strengths and points for further development, which can be used for self-assessment, peer assessment and discussion about the student’s portfolio. Furthermore, compiling the portfolio (selecting materials as input for the portfolio) already demands reflection from the student.

Table 3 Examples of detailed personalized feedback
Fig. 8 Interactive graph with timeline regarding students’ progress

A student usually starts with basic skills and, therefore, low levels, then improves and works towards entrustment. The timeline visualisation provides an overview of this time-based development, using assessment forms as benchmarks. Information in the timeline is given on two levels: the EPA level (as shown in Table 1) and the underlying (more specific) competency level. The EPA level is visible in the first view and the competency level is accessible by clicking on an EPA graph or on a legend item. Graphs for each EPA or competency are created by placing “assessment dots” at the date of the assessment (x-axis) and the aggregated score of the assessment (y-axis); the dots are connected by lines. Inside the visualization, the user can: (1) hover over an assessment dot to see a summary of the assessment form in a pop-up box, (2) click on an assessment dot to see a narrative message from the assessment of that EPA in a pop-up box (JIT feedback), or (3) go to a particular assessment form by clicking on the relevant button inside the pop-up box.
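The sketch below mimics the basic construction of the timeline view with matplotlib: one line per EPA, with assessment dots placed at the assessment date (x-axis) and the aggregated score (y-axis). The EPA names and data points are made up for illustration, and the interactive pop-up behaviour of the real VIZ module is not reproduced.

```python
from datetime import date
import matplotlib.pyplot as plt

# Made-up example data: EPA name -> list of (assessment date, aggregated score).
assessments = {
    "EPA 1: Consulting new ambulatory patients":
        [(date(2016, 2, 1), 2), (date(2016, 3, 15), 3), (date(2016, 5, 10), 3)],
    "EPA 2: Discharging patients from the hospital":
        [(date(2016, 2, 20), 1), (date(2016, 4, 5), 2)],
}

fig, ax = plt.subplots()
for epa, points in assessments.items():
    dates, scores = zip(*points)
    ax.plot(dates, scores, marker="o", label=epa)  # "assessment dots" joined by lines

ax.set_xlabel("Date of assessment")
ax.set_ylabel("Aggregated score (entrustment level)")
ax.set_yticks(range(1, 6))
ax.legend(loc="lower right", fontsize="small")
plt.show()
```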

Implementation of the E-portfolio enhanced with learning analytics

For the implementation of the learning analytics enhanced E-portfolio, a four-stage prototyping strategy was used. Stage 1 consisted of implementing a mock-up model to verify whether the E-portfolio environment was able to work with student models. In this stage, both the translation of pedagogical knowledge into a technological language and the technical aspects of working with MEBNs were investigated. The mock-up model was used as a walk-through device to be discussed with the experts and students per domain (medical, veterinary, teaching). This provided an early opportunity to assess the feasibility of integrating the MEBNs, so that any issues could be addressed in a timely manner to minimise risks through contingency management. One of the things discussed was how the student model should cope with the possibility that EPAs are interrelated. In the medical curriculum, students have different types of clerkships in which some EPAs are more important than others or are not assessed at all. Consequently, the entrustment level scores received for EPAs assessed in clerkship 1 could affect the entrustment level score for an EPA assessed in clerkship 2. Before proceeding to stage 2, the mock-up model was revised and discussed during another focus group meeting.

Stage 2 concerned the implementation of a prototype student model, including the input of a small coherent set of assessment instruments used in the context of a few EPAs. This student model was expert-based in the sense that it used relationships and probabilities retrieved from the pedagogical knowledge instead of actual user data. The model was used to explore the best way to model the assessment instruments, EPAs, entrustment levels, domain competencies, and personalized feedback. In our study, this stage revealed the amount of effort and data needed to create complete student models per domain. It also indicated how a scalable and generic approach to student modelling could be developed.

Stage 3 concerned extending the student model resulting from the second stage into a complete one covering an entire curriculum in professional education. This model was tuned using data collected with the assessment instruments in the participating institutions. An integrity and stability test was carried out successfully. Further, detailed manual testing of all the configurations provided by the student model was carried out. Output messages and log messages were checked for errors, missed configurations and incompatibilities between the submission and the output or the Bayesian model. It was concluded that the modules were stable, although a few specific issues needed to be targeted. For example, a case was identified in which the student model did not return feedback for a specific EPA. Further, a small issue occurred for E-portfolio submissions that took place on the same day, because the E-portfolio did not timestamp its submissions.

Stage 4 consisted of integrating the student model into the E-portfolio environment and connecting it to the personalized JIT and VIZ feedback modules. To gain insight into how supervisors perceived the preliminary design of the E-portfolio with the personalized feedback modules, an online semi-structured survey was administered in two rounds. Eight experts were asked to answer questions about the personalized feedback modules. The experts were heads of department or main supervisors of the six institutes for professional education involved in the project. The survey included digital sketches and mock-ups of the feedback modules. Participants were asked to assess the usefulness of several types of feedback and to provide suggestions for improvement (e.g., “Do you consider the chosen just-in-time feedback messages clear for the learners?”). It took between 30 and 60 min to fill out the questionnaire. The results were analyzed descriptively. Video meetings were held with each educational institute to clarify and confirm the researchers’ understanding of the survey results.

The findings indicate that the experts agreed that supervisors considered the VIZ module more useful than the JIT module feedback. Although the experts preferred the VIZ module, they also provided some suggestions for its improvement:

  • “It is not always clear when the elements in the VIZ module are interactive. Please make more clear when an element is clickable and the user can get more feedback.” (Expert 1)

  • “This is great way to visualize the development. Please keep possible color blindness of participants in mind. Maybe add the option to change the colors into different types of lines (such as dots, and stripes). This would also be a nice option in case the graph is printed in black and white.” (Expert 3)

  • “With colors, it could be possible to highlight—with dotted lines—what a learner has to achieve. In that way, it would be immediately clear what EPAs the learner still needs to work on.” (Expert 6)

Since the experts were more critical of the design of the JIT module, the JIT module was modified (see Table 4). In the second round, only the learner just-in-time feedback was surveyed, in an interactive clickable mock-up E-portfolio environment. All the experts strongly agreed with the design of the personalized just-in-time feedback messages in this second round. More specifically, all experts were enthusiastic about the personalized feedback mock-up for the learners and supervisors. The experts indicated that this type of feedback might help clarify for learners what good performance is by providing information about their development with respect to EPAs and domain competencies.

Table 4 Critical remarks and re-design personalized feedback module

Evaluation of the E-portfolio enhanced with learning analytics

After the implementation and testing of the E-portfolio enhanced with learning analytics, its perceived usefulness was evaluated among all 202 students and 141 supervisors from the six institutes of professional education who used the system. In total, 121 students (response rate 60 %) and 30 supervisors (response rate 21 %) voluntarily participated in the evaluation. From medical education, 32 students and 7 supervisors responded; from veterinary education, 38 students and 7 supervisors; and from teacher education, 51 students and 16 supervisors. The enhanced E-portfolio environment was evaluated by means of a usability questionnaire based on Venkatesh et al. (2003). We aimed to gain insight into the perceived: (1) ease of use (e.g., “Use of the E-portfolio environment requires lots of guidance”) and (2) usefulness of the personalized feedback messages (e.g., “The feedback the E-portfolio environment provides me is instructive to help me develop my EPAs”). All questions were rated on a 5-point Likert scale ranging from “1 = fully disagree” to “5 = fully agree”. The students and supervisors were requested to fill in the questionnaire halfway through and at the end of the students’ first clerkship of their professional training (a timespan of a few months).

The results show that, in general, students and supervisors found the E-portfolio enhanced with learning analytics useful. They favoured the overview of forms, the reduced use of paper and the possibility to monitor progress in professional development. Students suggested improvements in the form of a notification system for when certain forms were lacking and an automatic email option that sends requests to supervisors to validate forms. Students were positive about the supervisors’ narrative feedback and the visualisation options (they would prefer more and better visualisations). Regarding the automated feedback, students felt a lack of personal response, and in some cases the automated feedback was perceived as too general (i.e., the same feedback could be interpreted in different ways by different students). The results show the need for adjustments to improve users’ experience of the feedback messages, the navigation and the graphic design.

Discussion

Workplace-based feedback and assessment should be personalized to meet learners’ needs and should enable performance data to be gathered in different learning contexts over a longer period of time (Tozman 2012). Unfortunately, gathering performance data and using it for feedback and reflection purposes is not always self-evident at the workplace, which hampers learners’ professional development (Albanese 1999; Massie and Ali 2015). To address this, the quality of workplace-based feedback and assessment needs to be improved. In this paper we, therefore, presented the development and implementation of an E-portfolio environment for workplace-based assessment and feedback, using a five-phase iterative design approach. An existing E-portfolio environment was enhanced with learning analytics to improve the quality of workplace-based feedback and assessment for students and their supervisors in the professional domains of medical, veterinary and teacher education. Based on the pedagogical knowledge from the domain experts (phase 1) and state-of-the-art probabilistic reasoning techniques (MEBN; Laskey 2008), a student model (phase 2) was developed that analyses the E-portfolio data and sends suggestions for personalized feedback (phase 3) to the JIT module (i.e., just-in-time improvement feedback) and the VIZ module (i.e., visualization of a learner’s progress over time). Thereafter, the prototype of the E-portfolio environment, including the student model and the JIT and VIZ modules, was implemented in professional education training programmes (phase 4), where its usefulness was evaluated (phase 5). Based on the findings, suggestions for improvement were provided and fed back to the preceding phases of the iterative design. Our final system will be the result of ongoing revision rounds with stakeholders from the technical and educational institutes. As shown in other projects, it is possible to enhance feedback and assessment with greater flexibility, interaction and immediacy (Warren et al. 2014), and students and supervisors welcome the integration of mobile devices in workplace-based learning (e.g., Doyle et al. 2014).

The iterative design approach with development (phases 1–3), implementation (phase 4) and evaluation (phase 5) proved useful, as has been the case in other design projects (Van den Akker et al. 2012). However, at a more specific level, several practical, technological and ethical challenges need to be addressed.

Firstly, the quality of feedback and assessment at the workplace only improves if the design of the E-portfolio environment remains responsive to the requirements of the learners, supervisors and other experts. The so-called human factor in designing E-portfolio environments enhanced with learning analytics is of the utmost importance (Berns 2004; Greller and Drachsler 2012). Only close cooperation between the community of practitioners and the technical partners will ensure a sound implementation of an easily accessible E-portfolio environment for data collection and of learning analytics driven personalized feedback modules (JIT and VIZ) to inform learners and supervisors about the learner’s progress. In this regard, a limitation of our study is that the perceived ease of use of the personalized just-in-time feedback messages was evaluated within an existing E-portfolio environment. It is likely that the E-portfolio environment influences users’ perceptions regarding the feedback messages; we could not verify whether that was the case in our study. The same holds for the way the E-portfolio environment is embedded in a curriculum, since this can strongly influence users’ perception of feedback. The way the E-portfolio environment is embedded in the curriculum can differ per educational institute; as we did not control for this in the study, this can be seen as a limitation.

From a practical perspective, more attention needs to be paid to the pedagogical implementation of the E-portfolio in our study. For instance, Barbera (2009) found that E-portfolios had an impact on students’ learning when students shared their E-portfolio with peers, and Lyons (1998) and Hamp-Lyons and Condon (2000) showed the importance of reflection, conversation, and debate about the portfolios’ content among peers to improve professional development. A proper integration of E-portfolios in the curriculum as a means of reflection, feedback and assessment can stimulate professional growth (Bartell et al. 1998; Mansvelder-Longayroux et al. 2007).

Secondly, the use of learning analytics for workplace-based feedback and assessment demands new technological solutions. One of the present challenges in the deployment of learning analytics for workplace-based feedback and assessment is to allow for aggregation and personalized feedback based on many multi-sorted assessment moments. In our approach, the student model is fed with numerical data (e.g., entrustment scores) as well as narrative data (e.g., written comments) from the E-portfolio environment. Combining both types of data is challenging, since it requires the use of text mining and natural language processing techniques to analyse the narrative data and encode it into probabilistic propositions that inform the student model. Further research in this area is needed: new techniques, such as the application of ontologies or sentiment analysis for narrative feedback, are becoming available, but insight into their usefulness is still scarce.

Lastly, when using and examining the effects of E-portfolio environments enhanced with learning analytics, one should address the challenge of ensuring user privacy (Greller and Drachsler 2012; Selwyn 2015). Due to the nature of the multi-sourced E-portfolio dataset, at least two ethical issues arise. First, if the learner is the “owner” of the E-portfolio data, he or she has to grant access to other persons in order to let them see or use the data for feedback, assessment and research purposes. Although this can be addressed by requesting the learner, supervisor, and researcher to sign a so-called informed consent form, it still has to be formalized by drafting proper forms and describing a sound procedure. Furthermore, the E-portfolio data have to be anonymized before being analysed in the student model and before the output (i.e., personalized feedback) is represented as just-in-time suggestions for improvement and visualizations of a learner’s progress over time. This implies that, when designing the personalized feedback, it is important to ensure that the feedback provided is aligned with the ethics of the field. In addition, the personalized feedback should be carefully aligned with the needs and characteristics of the users: when the human factor is not properly taken into account, the personalized feedback provided may be incorrect or easily misinterpreted, leading to an undesirable and unethical impact on the user.

In this study, we presented a design for personalized feedback in a learning analytics driven E-portfolio system. One of the main benefits of personalized feedback is the opportunity it provides to learners to evaluate and monitor their progress and performance. Although this sounds promising, future research should reveal whether E-portfolios enhanced with learning analytics actually can stimulate learning at the workplace. Based on our preliminary experiences and findings, several research topics are worthwhile for further investigation.

First, learners’ preferences for types of personalized feedback can be researched in multiple ways. Besides using questionnaires, log-file data (e.g., mouse clicks representing visits of the JIT and VIZ-modules) can be used to examine whether a specific module or a combination of both modules is most beneficial for learning.

Second, using learning analytics techniques such as probabilistic student models often requires a large number of data points to generate reliable feedback messages. Since programs for professional education vary in duration, the possibilities for gathering data will differ. For example, in the Netherlands, teacher education programs for university graduates with a master’s degree take one year, while Dutch undergraduate medical education takes six years. Consequently, programs for professional education differ in the amount and type of data that can be gathered per student. Since many data points are needed, it could be the case that some work domains are more amenable than others to the use of E-portfolios enhanced with learning analytics. Future research could focus on examining the consequences of using E-portfolios enhanced with learning analytics in different kinds of curricula.

Lastly, further research could focus on what kind of stakeholders (e.g., learners, peers, and supervisors) could benefit most from E-portfolios that are enhanced with learning analytics. In the current project learners as well as supervisors have access to all feedback messages. In order to stimulate a learner’s self-assessment and reflection it might be interesting to examine whether and in what cases a supervisor should have maximum access.