Effectiveness of E-learning in Pharmacy Education
Results
Our search strategy identified 459 records from database searches. After removing duplicates, we screened 424 records and excluded 362 because they did not assess e-learning interventions or because the participants were not pharmacists or pharmacy students, leaving 62 records. We identified a further 7 citations from reference lists and examined the resulting 69 records in detail. Of these, 17 studies met the criteria for review.
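As an arithmetic check, the screening counts reconcile as follows (the implied duplicate count of 35 is inferred from the reported totals rather than stated explicitly):

\[
459 - 35 = 424, \qquad 424 - 362 = 62, \qquad 62 + 7 = 69,
\]

with 17 of the 69 records examined in detail ultimately meeting the inclusion criteria.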
Table 3 summarizes the characteristics of the pharmacy e-learning effectiveness studies. Every study assessed a different learning topic, although 3 studies included diabetes within their focus. Six studies (35%) assessed the effectiveness of e-learning in pharmacists; 10 studies (59%) assessed pharmacy students (of which 1 also included preregistration pharmacists), and 1 study assessed both pharmacists and pharmacy students. The number of participants per study ranged from 17 to 190.
Fourteen studies (82%) delivered e-learning in more than 1 format. The most common interventions were online modules, with or without simultaneous audio. Online reading materials, synchronous and asynchronous lectures, virtual patients, compulsory discussions (with peers or teachers), online feedback systems, and multimedia vignettes were also used. Six studies (35%) included traditional methods, such as face-to-face lectures, workshops, or small-group activities, as part of a blended or hybrid approach. Five studies (29%) included a comparator group (non-Internet teaching on the same or different topics, or no training). Settings varied considerably in mode of delivery (continuing education, distance learning, university core and elective units, university courses, and preregistration training) and in duration of the intervention (range: 25 minutes to 1 academic year).
Effectiveness was measured using a variety of objective and subjective assessments, including pre-post knowledge tests, curriculum tests, mock patients, rating scales, semi-structured interviews, and written or online surveys. All objective assessments were analyzed quantitatively, while subjective assessments were analyzed qualitatively, quantitatively, or both. We identified 3 effectiveness outcomes based on Kirkpatrick's hierarchy (reaction, learning, and behavior), with 13 studies (76%) reporting more than 1 of these outcomes. A further 19 effectiveness themes emerged through the iterative process; these were refined and presented as a thematic map of e-learning effectiveness in pharmacy education (Figure 2).
Figure 2.
Thematic map of e-learning effectiveness concepts in pharmacy education.
Reaction was assessed subjectively, with different instruments and scales in each study. E-learning programs were considered beneficial in improving knowledge and confidence and in stimulating interest. Courses were evaluated in terms of their functionality, measured as time taken to complete the course, online navigation (programs were easy to use and user-friendly), course presentation (courses were acceptably designed and integrated), and technical issues (online access and quality of recordings). The majority of pharmacists and pharmacy students considered their e-learning course relevant and practical. One study reported that pharmacy students were dissatisfied with online lectures.
Learning was assessed both objectively and subjectively. All 11 studies assessing knowledge change reported a significant improvement in knowledge immediately after e-learning, although the magnitude of the gain varied considerably between studies (range: 7% to 46%). Comparative studies assessing knowledge change demonstrated e-learning to be equivalent to lecture-based learning and superior to no training. One skills assessment reported significant gains (a 24% increase after training, adjusted relative to control), while another reported superior skills after e-learning on a posttest compared with control. Significant gains in self-perceived confidence or knowledge after e-learning varied in magnitude depending on whether a 5- or 7-point rating scale was used; most ratings improved by 1–2 points, representing a change of between 14% and 40%.
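One plausible reading of the 14%–40% range, assuming the point gain is normalized by the number of points on the scale, is:

\[
\frac{1 \text{ point}}{7\text{-point scale}} \approx 14\%, \qquad \frac{2 \text{ points}}{5\text{-point scale}} = 40\%,
\]

so a 1-point gain on a 7-point scale marks the lower bound and a 2-point gain on a 5-point scale the upper bound.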
Behavior was assessed subjectively, as direct application of knowledge or skills to the workplace, or willingness to change practice. Although intended behavior change was reported, the intention varied across studies, depending on the educational topic.
The quality of each study was rated out of a maximum of 10 points as low (0–4), moderate (5–7), or high (8–10). The mean quality score across all included studies was 5.7, within the moderate band (Table 4).
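Restating the rubric above as a banding function over the 10-point score $s$:

\[
\text{quality}(s) =
\begin{cases}
\text{low}, & 0 \le s \le 4 \\
\text{moderate}, & 5 \le s \le 7 \\
\text{high}, & 8 \le s \le 10
\end{cases}
\]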
Across all studies, the most common methodological flaws were selection bias and the associated poor external validity (narrow sampling frames, convenience sampling, self-selection, use of financial incentives, and lack of randomization). A lack of validated tools and/or the absence of a control group limited the quality of 11 of the 17 studies. Only 4 studies reported research questions or hypotheses. Two studies had substantial loss to follow-up (40% or greater) at posttest. Almost all studies included self-reported (subjective) data; in uncontrolled studies, confounders affecting opinions were not identified or accounted for in the study design or analysis. Most studies did not clearly explain their analyses or fully report the results (eg, significant differences were claimed on the basis of pooled data, but the pooled results were not reported).
There was no apparent relationship between study quality scores and the impact of e-learning interventions as measured against Kirkpatrick's hierarchy. Conversely, the BEME strength of findings for each study showed a trend, with higher-quality studies receiving higher ratings on the BEME scale.