Development and Validation of a Digital Literacy Scale in the Artificial Intelligence Era for College Students

  • Ha Sung Hwang (Department of Media & Communication, Dongguk University) ;
  • Liu Cun Zhu (Department of Media & Communication, Dongguk University) ;
  • Qin Cui (Department of Media & Communication, Dongguk University)
  • Received : 2023.03.03
  • Accepted : 2023.07.09
  • Published : 2023.08.31

Abstract

This study developed a digital literacy instrument and tested its effectiveness on college students' perceptions of AI technologies. In creating the new digital literacy test tool, we reviewed the concept and scales of digital literacy based on previous studies that identified the characteristics and measurement of AI literacy. We developed 23 preliminary questions for our research instrument and used a quantitative approach to survey 318 undergraduates. After conducting exploratory and confirmatory factor analyses, we found that digital literacy in the age of AI has four ability sub-factors: critical understanding, recognition of AI's social impact, AI technology utilization, and ethical behavior. We then tested the sub-factors' power to predict perceptions of AI's usefulness and ease of use. The regression results show that the strongest predictor of both the perceived usefulness and the perceived ease of use of AI technology was the ability to use AI technology. This finding implies that for college students, the ability to use various tools based on AI technology is an essential competency in the AI era.

1. Introduction

With the development of information and communication technology (ICT) and the emergence of various communication media, people have come to accept and consume information in a new media ecosystem. In particular, the Fourth Industrial Revolution, which researchers have discussed since 2016, is steering modern society toward digital technology that assists humans with human-like artificial intelligence (AI). As a result, AI is becoming an important technology that affects many aspects of life, and its widespread use is changing how we interact with our everyday environment. Businesses have applied AI across information and communication services such as Twitter, Facebook, Instagram, KakaoTalk, YouTube, and Netflix, fundamentally changing the media ecosystem, including the production and distribution of messages. For example, the AI recommendation systems widely used on platforms such as YouTube and Netflix provide personalized content, and AI speakers 'converse' directly with people, enabling users to obtain desired information verbally rather than through textual search. At the same time, as AI technologies such as algorithm-based recommendation services advance, concerns about personal information hacking, the distribution of illegal or harmful information, and the spread of false information are surfacing in the AI era.

Although we are heavily exposed to digital technology every day, that does not mean we are digitally skilled. Consequently, scholars argue that digital technology literacy (or competency) is necessary for the AI era. Because digital literacy is an umbrella term spanning many different technologies, we need to design specific case studies, such as digital literacy for AI or digital literacy for drone usage [1]. Nonetheless, no instrument exists to measure the digital literacy skills needed to apply AI technology. Therefore, this study builds context-specific measurement tools to assess college students' digital literacy skills and competencies in the AI environment.

Paul Gilster first used the term digital literacy in 1997, defining it as “the ability to understand information—and more importantly—to evaluate and integrate information in different formats that the computer can deliver” (p. 6) [2]. However, the definition has evolved as new technologies were added and as the knowledge and skill demands of the workforce (and of education) changed [3]. In a review of the academic literature on digital literacy, Reynolds (2008) notes that definitions of digital literacy are often skills-based and grounded in the uses of particular technologies [4]. This is concerning because an instrument that does not reflect the AI context may produce inaccurate results. In addition, existing digital literacy instruments are mostly adopted and re-translated from earlier scales, so researchers have not performed construct and content validity tests during implementation [5].

Today's college students were mostly born into the computer era, and we refer to them as the digital native generation that grew up in the digital environment. “Digital natives” is the term Marc Prensky (2001) coined to define the students of 2001 as the first generation who “spent their entire lives surrounded by and using” digital technologies [6]. He said, “Our students today are all ‘native speakers’ of the digital language of computers, video games, and the internet” (p. 2). However, even though they grew up with digital technology, they may not know everything about it [7]. College students’ daily activities include reading online course materials, posting on discussion boards, and creating and managing projects and assignments (p. 48) [8]. Some students arrive at university already equipped with digital skills, but others still need support. Scholars believe that literacy for digital technologies is critical to taking on a more participatory role in society and hence improving quality of life [9].

Thus, this study develops and tests the effectiveness of digital literacy instruments on college students’ perceptions of AI technology. We explore college students’ awareness and possession of digital literacy skills and examine the effectiveness of measurement scales. Researchers can use this study’s findings to guide future investigations in AI literacy.

2. Literature review

2.1 Review of the Concept and Measurement of Digital Literacy

In light of the rapid development of digital technology, people need various technical, cognitive, and sociological skills to perform tasks and solve problems in the digital era. These skills are known as “digital literacy” in the academic field [10]. Although the concept of literacy has always gone beyond the simple ability to read and write, skill-based literacies have complemented it: ideas developed to deal with information of increasing complexity and with developing technologies [11].

In 2011, UNESCO described digital literacy as a set of basic skills required for working with digital media, information processing, and retrieval. Later, the American Library Association (ALA) Digital Literacy Task Force (2013) defined digital literacy as the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills [12]. In addition, Ferrari (2012) documented that:

Digital literacy is the skills required to achieve digital competence. It is underpinned by basic skills in ICT and the use of computers to retrieve, assess, store, produce, present and exchange information and to communicate and participate in collaborative networks via the Internet (p. 8) [13].

These definitions imply that digital literacy involves more than the ability to use software or operate a digital device. Rather, it involves various complex cognitive, sociological, and emotional skills [14]. It allows people to achieve other valued outcomes, especially in the modern digital economy.

Meanwhile, because digital literacy is a multi-disciplinary concept and the skills it comprises vary among scholars, many efforts have been made to explore and identify its components. For example, Larson (2010) developed digital literacy measurement tools with sub-dimensions such as computer, software (Windows, word processing, presentation), Internet, and web process operating abilities, information retrieval ability, and time and task management ability [15]. This approach emphasizes the ability to understand and use the terminology of ICT as part of digital literacy.

In Korea, many scholars have focused on exploring sub-dimensions of digital literacy. For example, Kim (2004) explained digital literacy in three subfields: technology, utilization, and mind [16]. Other studies added sociocultural elements and divided digital literacy into technological environment, information knowledge, and sociocultural literacy [16,17]. Focusing on digital media usage, Ahn and Seo (2014) extracted eight detailed factors: media use technology, media property understanding, media information evaluation, media use ethics, expression ability, expression technology, communication and sharing, and citizenship [18]. Recently, Chung, Kim, and Hwang (2021) developed a measurement of digital literacy and found five ability sub-factors: individual creativity and empathy, software/app use, smart device use, privacy protection, and ethical behavior [19].

The digital literacy measurement tools discussed in previous studies began with the ability to use computers and software and have more recently expanded into the cognitive domain. Although these tools share an emphasis on evaluating the ability to use digital devices or to use and process information, they have limitations as we enter the age of AI. Therefore, we argue that understanding digital literacy in the AI age requires the ability to apply technology and information and to judge objectively AI's impact on human society.

2.2 The new concept of Digital Literacy for AI

The advancement of various algorithm-based AI technologies and the activation of online communication have resulted in several concerns about personal information hacking, the distribution of illegal or harmful information, and the dissemination of false information. The need for research on new digital literacy in the era of AI is thus emerging, but so far, empirical findings for measuring it are lacking, and previous studies have relied on qualitative research related to AI. For instance, Choi (2018) proposed a framework of digital competencies, the core competencies required in the Fourth Industrial Revolution [20]. These competencies include understanding the digital society and digital citizenship, communication and cooperation using digital technology, critical thinking ability and information literacy, computational thinking and problem-solving, and creative convergence thinking and content creation.

Hou and Wang (2020) described three stages of adopting and using AI technology and the skills each requires. The first, beginner stage involves adapting well to an environment changed by AI by recognizing AI's convenience and usefulness and viewing it positively. The second stage refers to using AI effectively to solve everyday problems. The last stage refers to the ability to develop AI tools or products through programming, based on professional knowledge and creative thinking about AI [21]. In addition, Lee (2020) reviewed the concept of AI and the development of AI technology by examining the changes in life, industry, and jobs, as well as the social issues, caused by the development of AI technology. Through this, he suggested basic AI knowledge, utilization and development ability, and AI ethics and values as necessary AI skills [22]. Moreover, Wang (2018) explained AI literacy by expanding the concept of information literacy and discussing it from the perspectives of AI knowledge, ability, emotion, and ethics [23].

In other research, Lee and Park (2021) studied media literacy based on AI technology from the perspective of liberal arts education [24]. The study found that AI literacy education positively affected students' perceptions of AI ethics, verifying that it offered non-majors a positive experience of using AI technology in university liberal arts classes. They emphasized that the ability to critically understand AI is essential for college students; in other words, critical thinking ability in an AI society should be included as a new literacy component. Recently, by reviewing existing research and synthesizing a variety of interdisciplinary literature into a set of core competencies, Long and Magerko (2020) defined AI literacy as “a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace” (p. 598) [25].

From the above, we can see that AI knowledge, the ability to use AI, critical thinking, and AI ethical values are vital in defining the concept of literacy in the AI age and developing measurement tools. Although many technologies increasingly integrate AI, public understanding of these technologies is often limited. Therefore, additional research is needed on what competencies or abilities users need to interact with and critically evaluate AI technologies. Until now, most studies on AI literacy have remained at the level of presenting models for AI literacy education and offering suggestions; no standardized measurement scale exists. Therefore, this study addresses the limitations of previous studies and proposes the following research question.

RQ1: What are the components of the digital literacy scale in AI technology?

2.3 Relationships between Digital Literacy and AI Technology Usefulness and Ease of Use

To be active in the modern digital environment, humans must utilize new technologies and media. Traditionally, researchers defined literacy as the ability to read and write, but new technology and the Internet have changed its form, transforming it into digital literacy on new digital platforms. Therefore, digital literacy plays a vital role in adopting new technologies. Some scholars include digital literacy as a new antecedent of the Technology Acceptance Model (TAM), which focuses on the perceived usefulness and ease of use of new technology [26]. While usefulness is a concept based on the effect of use, ease of use relates to how the technology is used. For instance, Davis (1985; 1989) [27,28], who proposed TAM, defined usefulness as “the degree to which an individual believes that the use of a specific system can improve his/her job performance” (p. 109). He also defined ease of use as “the degree to which an individual feels that there is no burden in putting in physical and mental effort in using a specific system,” that is, the effort required to use a tool (p. 109) [27].

In many studies using TAM and verifying the basic technology acceptance model, researchers have attempted to find specific antecedent variables that affect usefulness and ease of use [29]. Given that digital literacy is a survival skill in the digital age, essential for online activities, it seems closely related to perceptions of usefulness and ease of use. For instance, the greater one's ability to understand and utilize new digital technologies or services, the more positive one's perception of using them [30]. In addition, Lim and Kim (2021) showed that health literacy, the ability to understand and process health information or health services, has a statistically significant effect on smartphone usability [29]. This research aligns with the results of Choi et al. (2014), who suggested health interest and health literacy as antecedents of the usefulness of health programs [31]. Mohammadyari and Singh (2015) investigated the impact of digital literacy on employees' intention to use technology in the context of small and medium-sized enterprises (SMEs) and found that digital literacy significantly affects users' performance and intention to use Web 2.0 tools [32]. In addition, Nikou et al. (2022) found that both information literacy and digital literacy have a direct impact on the perceived ease of use of technology but not on its perceived usefulness [30].

Another study, by Jang and Sung (2022), on the intention to accept public service policies confirmed that the higher the level of digital literacy, the higher the perceived convenience and usefulness of use [33]. This result indicates that people with higher digital literacy recognize greater benefits from artificial intelligence technology, further strengthening their intention to use various platforms and services. In other words, digital literacy reflects the ability to utilize digital technology efficiently in performing tasks rather than simply reflecting the characteristics of the technology. Therefore, when accepting a new technology, feeling that the technology is complicated, that much knowledge is needed to use it, or that it offers no convenience indicates a lack of digital literacy [34].

Based on the above theoretical discussion, users require a certain level of digital technology utilization ability to use the AI technologies actively applied in the digital environment. If users' digital literacy is high, we expect their understanding of the value provided by a service or technology to be high as well. Above all, since access to digital devices requires using various digital technologies, we expect digital literacy to significantly affect the use of the service. Therefore, this study verifies the developed digital literacy scale's predictive validity and explores the sub-factors' influences. Thus, we propose the research questions below:

RQ2: What are the effects of the sub-dimensions of digital literacy on the usefulness of AI technology?

RQ3: What are the effects of the sub-dimensions of digital literacy on the ease of use of AI technology?

3. Method

3.1 Research Procedure

In this study, we conducted four steps to develop a new AI digital literacy scale while considering AI’s characteristics and securing its validity. First, we collected prior studies on digital literacy development and reviewed sub-factors and measurement scales. Second, we analyzed previous studies regarding recent AI characteristics or literacy research to select the sub-factors and preliminary questions for the digital literacy scale. Then based on that, we proposed initial questions. Third, we conducted exploratory and confirmatory factor analyses to confirm the validity of the constructed digital literacy scale. Fourth, we performed hierarchical regression analysis to confirm the predictive validity of the sub-factors of digital literacy. In this final step, we used the sub-factors as independent variables and perceived usefulness and ease of AI technology as the dependent variables.

3.2 Participants

The participants of this study were 318 college students. We initially collected 389 responses through online and offline surveys over about two weeks, from April 27 to May 8, 2022. Of these, we excluded 71 because of insincere responses, resulting in 318 responses for the final analysis. The respondents included 182 (57.2%) males and 136 (42.8%) females. The ages ranged from 15 to 29 years, as follows: 20–24 (n = 253, 79.6%), 15–19 (n = 44, 13.8%), and 25–29 (n = 21, 6.6%). In terms of grades, 56 students (17.6%) were freshmen, 112 (35.2%) were sophomores, 104 (32.7%) were juniors, and 46 (14.5%) were seniors.

3.3 Measures

To develop a new digital literacy test tool, we reviewed the concept and scale of digital literacy based on previous studies that identified the characteristics and sub-dimensions of AI literacy. Then we defined AI digital literacy as ‘the ability to use information technology and actively apply artificial intelligence technology, the ability to search information efficiently, the ability to critically accept, utilize, share, and create information obtained through AI technologies from an ethical and moral perspective, and one’s ability to consider the impact of AI technologies on society.’

Based on this definition, we developed four sub-dimensions of digital literacy for AI: the ability to use AI technology, critical comprehension ability, ethical behavior ability, and the ability to recognize the social impact of AI. We reconstructed and developed the survey items according to each dimension and measured the items on a five-point Likert scale (1 = not at all, 5 = very much). The detailed definitions and measurements of the important variables used in the study follow.

The ability to use artificial intelligence technology refers to effectively learning about and utilizing artificial intelligence technologies and products. Previous studies have shown that this most basic ability of digital literacy allows people to use AI technology in daily life and in the learning process [21,23,25]. In general, the ability to use AI technology refers to the ability to search for and obtain desired information and to combine the obtained data into new information for one's purpose [2,35,36]. In this study, we defined the ability to use artificial intelligence technology as a capability that includes obtaining desired information with the help of artificial intelligence recommendation services and the comprehensive ability to solve one's problems. Examples include “I can use artificial intelligence technology to find the information or content I need,” “I can create new results using the information or content I searched for,” and “I understand artificial intelligence algorithms.” We used six measurement items in this category.

Critical comprehension ability refers to the ability to interpret and analyze data concerning the truthfulness, objectivity, and informativeness of the contents presented by an artificial intelligence recommendation service. As the main competency of information literacy, previous studies have discussed it as the ability to access information, evaluate its suitability, use it appropriately for the purpose, and recognize the need for it [33,37]. We used five items to measure this ability, for example, “I don't think what is presented by artificial intelligence is always true” and “I know how to make sure that what is presented by artificial intelligence is believable.”

Ethical behavior ability refers to an attitude of critically accepting information obtained using digital tools and technologies from an ethical and moral aspect [16,19,21]. Examples include “When using AI-based big data, I think about whether the information obtained is illegal or reliable,” “I have a standard for distinguishing good content to share from content that is not,” and “I fully consider the position of others when sharing the information I have collected.” We used six items in this category.

The ability to recognize the social impact of artificial intelligence refers to the ability to think about the implications of AI for society and grasp the role of AI technology in human development. This ability is a new competency differentiated from existing digital literacy, and some researchers have argued that digital literacy should include an attitude of thinking about AI's influence on society [21,23,25]. Because AI algorithms are logical languages that inevitably involve the designer's intentions and various social factors, they cannot perfectly implement fairness and objectivity [38]. Considering concerns that AI can make decisions through its own judgment, we urgently require a critical perspective on and understanding of digital media recommendation and search algorithms. Therefore, in this study, to measure the ability to recognize AI's social impact, we used statements such as “I'm worried that AI could have a bad influence on humans,” “I know how quickly AI can do things,” and “Even if AI advances, I don't think it can replace humans.” We used six items to measure AI social impact recognition.

Perceived usefulness is the degree to which one believes using an AI recommendation service can improve performance. We selected the measurement items from previous studies [30,33,39] and modified them for this study. Examples include “The AI recommendation service-based platform provides useful information to me” and “The AI recommendation service increases the efficiency of use.” We used four items to measure this category.

Perceived ease of use is the degree to which people feel that there is no burden of physical and mental effort in using AI recommendation services. We selected the measurement items from previous studies [30,33,39] and modified them for this study. Examples include “The AI recommendation service is convenient to use without any difficulties” and “I can easily find the information I want through the functions of the AI recommendation service.” We used four items here.
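
For illustration, the following minimal sketch shows how the composite scores used in the later analyses could be computed from the 5-point Likert responses; the file name and item column names are hypothetical, not the actual variable labels from our survey.

```python
import pandas as pd

# One row per respondent; item columns such as "use1".."use6" are
# hypothetical names standing in for the survey items described above.
df = pd.read_csv("survey_responses.csv")  # hypothetical file

scales = {
    "ai_use":        [f"use{i}" for i in range(1, 7)],   # 6 items
    "critical":      [f"crit{i}" for i in range(1, 6)],  # 5 items
    "ethics":        [f"eth{i}" for i in range(1, 7)],   # 6 items
    "social_impact": [f"soc{i}" for i in range(1, 7)],   # 6 items
    "usefulness":    [f"pu{i}" for i in range(1, 5)],    # 4 items
    "ease_of_use":   [f"peou{i}" for i in range(1, 5)],  # 4 items
}

# Composite score for each construct = mean of its 1-5 Likert items.
for name, item_cols in scales.items():
    df[name] = df[item_cols].mean(axis=1)
```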

4. Results

4.1 Validation of Digital Literacy Scale

We selected 23 preliminary questions for the digital literacy scale applicable to AI by reviewing previous studies on recent AI technology characteristics and literacy research. Then, we conducted exploratory factor analysis using SPSS 25.0 on the 23 measurement items to verify the scale. Before conducting the exploratory factor analysis, we checked the appropriateness of the data: the KMO value was .84, and Bartlett's test of sphericity was statistically significant (χ² = 2794.647, df = 253, p < .001). Thus, we judged the data appropriate for exploratory factor analysis.
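
We ran these adequacy checks in SPSS; a minimal open-source sketch with the factor_analyzer package (the item data frame is hypothetical) would be:

```python
import pandas as pd
from factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("literacy_items.csv")  # hypothetical: 318 rows x 23 items

# Bartlett's test of sphericity: does the correlation matrix differ from identity?
chi_square, p_value = calculate_bartlett_sphericity(items)

# Kaiser-Meyer-Olkin measure of sampling adequacy (the paper reports .84).
_, kmo_total = calculate_kmo(items)

print(f"Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}")
print(f"KMO = {kmo_total:.2f}")
```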

Then, we performed a principal component analysis, applying the varimax method to rotate the factors. We determined factor extraction by considering eigenvalues of 1.0 or more, primary loadings of .60 or higher on each factor, and secondary (cross-) loadings below .40 [40]. In addition, high communality, the proportion of an item's variance explained by the extracted factors, indicates an important variable; we set the communality criterion at .40 and deleted the items that did not meet it. The first exploratory factor analysis extracted five sub-factors, and we named each sub-factor by referring to the items it contained. Lastly, based on the above criteria, after deleting two questions with communalities of .40 or less and two questions that did not load cleanly on a single factor, we performed a secondary exploratory factor analysis on the remaining 19 questions (see Table 1).

Table 1. Results of Secondary Exploratory Factor Analysis of Digital Literacy Scale Items


Note: We extracted factors with initial eigenvalues greater than 1 through principal component analysis and rotated the factors using the Varimax method.
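
The extraction-and-rotation procedure described above can be sketched as follows with factor_analyzer (method and thresholds per the text; the item data frame is hypothetical):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("literacy_items.csv")  # hypothetical item data

# Principal component extraction with varimax rotation, as described above.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="varimax")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()  # retain factors with eigenvalue > 1.0
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
communalities = pd.Series(fa.get_communalities(), index=items.columns)

# Retention rules from the text: communality >= .40, primary loading >= .60,
# secondary loadings < .40; failing items are dropped and the EFA re-run.
to_drop = communalities[communalities < .40].index
print("Items failing the communality criterion:", list(to_drop))
```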

According to the results, as expected, we extracted four sub-factors with an overall explained variance of 62.23%. Factor 1, with five items, is ‘critical comprehension ability’ because it relates to the critical understanding of the content presented by artificial intelligence. Factor 2 consists of five questions and is ‘AI social impact recognition ability’ because it relates to the ability to adapt to and critically recognize the changing social culture caused by artificial intelligence. Factor 3 also has five questions; it is ‘AI technology utilization ability’ because it concerns the ability to solve problems by using AI services in one's daily life and study. Lastly, Factor 4 has four questions and is ‘ethical behavior ability’ because it relates to applying ethical values online when using AI. In addition, we conducted a reliability analysis to verify the internal consistency of each sub-factor of digital literacy. All factors had acceptable reliability, with Cronbach's α coefficients of at least .78.
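
Cronbach's α has a simple closed form; a minimal implementation (item column names are illustrative) is:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# e.g., the five critical-comprehension items (illustrative column names):
# print(cronbach_alpha(df[["crit1", "crit2", "crit3", "crit4", "crit5"]]))
```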

4.2 Construct Validity of Digital Literacy Scale

This study conducted a confirmatory factor analysis using AMOS 24.0 to verify the construct validity of the AI digital literacy scale. According to the results, the χ²/df value was 2.56, supporting the suitability of the research model. In addition, the absolute fit indices, the main indicators of model fit, were GFI = .89, RMR = .04, and RMSEA = .07; the incremental fit indices were CFI = .91 and TLI = .90; and the parsimony fit index was AGFI = .86. Thus, we confirmed the goodness of fit of the final model at a suitable level.
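
We ran the CFA in AMOS 24.0; for readers without AMOS, a rough open-source equivalent with the semopy package (lavaan-style model syntax; item names are hypothetical) would look like this:

```python
import pandas as pd
import semopy

# Four-factor measurement model matching the 19 retained items
# (5 + 5 + 5 + 4); item names are illustrative, not our actual labels.
desc = """
critical      =~ crit1 + crit2 + crit3 + crit4 + crit5
social_impact =~ soc1 + soc2 + soc3 + soc4 + soc5
ai_use        =~ use1 + use2 + use3 + use4 + use5
ethics        =~ eth1 + eth2 + eth3 + eth4
"""

data = pd.read_csv("literacy_items.csv")  # hypothetical
model = semopy.Model(desc)
model.fit(data)

# calc_stats reports chi2, GFI, AGFI, CFI, TLI, RMSEA, among other indices.
print(semopy.calc_stats(model).T)
```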

We checked discriminant validity by examining the correlations between the latent variables to determine whether they represent distinct constructs. According to the literature [41], establishing discriminant validity requires the average variance extracted (AVE) to be greater than the squared correlation coefficients or, equivalently, all correlation coefficients to be lower than the square roots of the AVEs. We judged the data to have discriminant validity, as the correlation coefficients were lower than the square roots of the AVE values.
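
This Fornell-Larcker-style check works directly from the standardized loadings and the latent correlations; a sketch with illustrative numbers (not our actual estimates) is:

```python
import numpy as np

def ave(std_loadings):
    """AVE = mean of the squared standardized loadings of a construct's items."""
    return float(np.mean(np.square(std_loadings)))

# Illustrative standardized loadings per construct (hypothetical values).
loadings = {
    "critical":      [0.72, 0.68, 0.75, 0.70, 0.66],
    "social_impact": [0.71, 0.65, 0.69, 0.73, 0.62],
    "ai_use":        [0.74, 0.70, 0.68, 0.66, 0.71],
    "ethics":        [0.70, 0.72, 0.67, 0.69],
}
sqrt_ave = {name: np.sqrt(ave(l)) for name, l in loadings.items()}

# Discriminant validity holds if every latent correlation r(a, b) is below
# min(sqrt_ave[a], sqrt_ave[b]); compare against the CFA's latent correlations.
print(sqrt_ave)
```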

Then, we reviewed normality by calculating each item's mean and standard deviation. We obtained the following results: (1) the means of the questions ranged from 3.87 to 4.32, (2) the standard deviations ranged from .80 to 1.06, and (3) skewness ranged from −1.20 to −.19 and kurtosis from −.70 to .94. All items thus approximated a normal distribution (meeting the criteria of absolute skewness below 3 and absolute kurtosis below 8) [42]. Therefore, we judged the final items' descriptive statistics and item distributions to be statistically sound.
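
A sketch of this item-level screen with scipy (which reports excess kurtosis, i.e., kurtosis relative to 0, matching the values above; the item data frame is hypothetical):

```python
import pandas as pd
from scipy.stats import kurtosis, skew

items = pd.read_csv("literacy_items.csv")  # hypothetical item data

summary = pd.DataFrame({
    "mean": items.mean(),
    "sd": items.std(ddof=1),
    "skewness": items.apply(skew),
    "kurtosis": items.apply(kurtosis),  # Fisher (excess) kurtosis by default
})

# Normality screen used above: |skewness| < 3 and |kurtosis| < 8.
summary["normal"] = (summary["skewness"].abs() < 3) & (summary["kurtosis"].abs() < 8)
print(summary)
```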

4.3 Predictive Validity of Sub-factors of Digital Literacy

To verify the predictive validity of the four sub-factors derived through exploratory and confirmatory factor analysis, we conducted hierarchical regression analyses with the perceived usefulness and perceived ease of use of AI technology as the dependent variables. In other words, we tested the effects of the sub-factors of digital literacy on usefulness and ease of use.
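
A minimal sketch of the three-step hierarchical regression with statsmodels (variable names are hypothetical; predictors and the outcome would be z-scored first so the coefficients read as the standardized βs reported below):

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_data.csv")  # hypothetical composite-score data
# z-score the continuous variables first so coefficients are standardized betas:
# df[cols] = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)

blocks = [
    "gender + age",                                          # step 1: demographics
    "gender + age + internet_use + video_platform_use",      # step 2: usage behavior
    ("gender + age + internet_use + video_platform_use + "
     "ai_use + critical + social_impact + ethics"),          # step 3: literacy sub-factors
]

prev_r2 = 0.0
for step, rhs in enumerate(blocks, start=1):
    fit = smf.ols(f"usefulness ~ {rhs}", data=df).fit()
    print(f"Step {step}: R2 = {fit.rsquared:.3f}, delta R2 = {fit.rsquared - prev_r2:.3f}")
    prev_r2 = fit.rsquared

# Repeat with "ease_of_use" as the dependent variable for RQ3.
```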

4.3.1 The Impact of Digital Literacy on Perceived Usefulness of AI Technology

As Table 2 shows, gender (β = −.166, p < .01) was a significant predictor in the first model in which we entered demographic variables first. The result shows that men perceive AI recommendation services as more useful than women. With an increase of 1.7% in the explained variance compared to the first stage, the second model shows that the average Internet use negatively affected the perception of the usefulness of the AI recommendation service (β = −.140, p < .05).

Table 2. Effect on Perception of Usefulness (Result of RQ2)


Note: N = 318, *p < .05. **p < .01. ***p < .001.

In the third model, the ability to use AI technology (β = .224, p < .001) and AI social impact recognition ability (β = .130, p < .05) had positive effects on the usefulness of the AI recommendation service. These results show that people with a higher ability to use AI technology and critically think about how society has changed due to AI technology have an increased perception of the AI recommendation service as useful.

4.3.2 The Impact of Digital Literacy on Perceived Ease of AI Technology Use

We conducted a hierarchical regression analysis to examine the effects of sub-factors of digital literacy on perceived ease of use (see Table 3). In the first step, we input demographic variables, finding that gender (β = −.187, p < .01) and age (β = .172, p < .05) were statistically significant predictors. The explanatory power of the first-stage model was 6.3%.

Table 3. Effect on Perception of Ease of Use (Result of RQ3)


Note: N = 318, *p < .05. **p < .01. ***p < .001.

Next, we entered the service use behavior variables in the second step. With an increase of 2.3% in the explained variance compared to the first stage, the second model shows that the frequency of using a video platform (β = .169, p < .01) positively affected the perceived ease of use of the AI recommendation service. Finally, when we entered the four sub-factors of digital literacy in the third step, all factors had a significant positive effect on the dependent variable (perceived ease of use of the AI recommendation service). This result implies that the ability to utilize AI technologies, a critical understanding of information (contents), recognition of AI's social impact, and ethical behavior when using AI technologies were all associated with the perceived ease of use of the AI recommendation service.

Meanwhile, comparing the relative influence of each variable, the most powerful predictors were AI technology utilization ability (β = .300, p < .001) and ethical behavior ability (β = .188, p < .001), followed by critical understanding ability (β = .161, p < .01) and AI social influence recognition ability (β = .126, p < .05).

5. Conclusion and Discussion

This study aimed to understand the meaning of digital literacy in the era of artificial intelligence and to develop scales and questionnaires to measure it. We defined digital literacy as the ability to process and utilize information using various digital tools, create new content, communicate and collaborate with others, and solve problems, and we developed a measure for this concept. We achieved this research purpose by reviewing relevant previous literature to derive the sub-dimensions of digital literacy and developing a measurement questionnaire. We proposed 23 preliminary questions and confirmed the final 19 items through validity and reliability verification. Through exploratory and confirmatory factor analysis of the 19 questions, we found that digital literacy in the age of AI has four ability sub-factors: critical understanding, AI social impact recognition, AI technology utilization, and ethical behavior. The main findings of this study are as follows.

First, the new digital literacy required in the age of artificial intelligence is composed mainly of four sub-factors. The first factor is critical comprehension ability, which means interpreting and analyzing data on the truthfulness, objectivity, and informativeness of the content presented by the AI recommendation service. This component is significant in information literacy; it is an integrated concept from previous studies discussing information comprehension [2], text evaluation [43], and critical thinking about information [17]. This result implies that as the AI recommendation system advances, information becomes more readily available, and the emphasis is on the ability to make critical and objective judgments on this information.

The second factor is the ability to use AI technology. This factor is an essential competency of digital literacy, meaning the ability to use artificial intelligence to solve problems in one's daily life and studies. As discussed in previous studies on digital literacy (Kang, Song, & Kim, 2014; S. Kim et al., 2017), this ability includes using computers, operating software, and using digital devices [2,15,16]. This finding aligns with Reynolds' (2008) view that digital literacy is often skills-based and grounded in the uses of technologies [4].

The third factor is AI social impact recognition ability, which involves thinking about AI's impact on society and grasping the role of AI technology in human development. The AI social impact recognition ability extracted in this study is a new competency not revealed in previous studies. This factor concerns the influence of AI on society: as artificial intelligence develops, it increasingly thinks, judges, and makes decisions without human commands. Therefore, we need the ability to adapt to and critically perceive the social and cultural changes brought about by artificial intelligence.

The fourth factor is ethical behavior competence. This factor refers to the competency of ethical awareness and usage attitudes regarding online information. It is a multidimensional concept, including attitudes toward the norms and ethics needed to construct a desirable information society and a critical view, from moral and ethical aspects, of information people obtain using digital tools and technologies. Recently, with the advancement of AI technology, such as various algorithm-based recommendation services, and the activation of online communication, multiple concerns about personal information hacking, the distribution of illegal or harmful information, and the dissemination of false information have arisen [38]. At this point, online norms and ethical behavior skills are necessary competencies that reflect the characteristics of artificial intelligence services.

Considering the four sub-factors derived in this study, digital literacy in the age of AI is the ability to use AI technologies and services, search for information efficiently, and critically accept, utilize, share, and create information. We can also interpret it as a new level of literacy that includes the ability to adapt to and critically perceive the changing social culture brought about by AI. Therefore, we can understand digital literacy in the AI age as multidimensional critical literacy based on an objective understanding and judgment of artificial intelligence, rather than merely the understanding and use of various digital devices discussed in previous studies.

In addition, this study found that the developed digital literacy scale can predict perceptions of the usefulness and ease of use of artificial intelligence technology. These results align with those of Jang and Sung (2022) [33]. However, unlike studies that treat the level of digital media literacy as a single variable, this study meaningfully examined the influence of each sub-factor of literacy. Specifically, we confirmed that the ability to use AI technology and the ability to recognize the social impact of AI positively affected the perceived usefulness of artificial intelligence recommendation services. This finding implies that when users understand and correctly use platforms and services, their awareness of the benefits of AI technology increases, and they perceive the service as more useful.

In addition, as people objectively recognize and evaluate the social and cultural changes caused by AI technology (AI social impact recognition ability), their adaptability to new media environments and their ability to utilize technology increase, raising their awareness of the service's usefulness. However, critical understanding ability and ethical behavior ability did not appear as significant predictors of perceived usefulness. We can explain this by referring to the motivation for using video platforms: since college students use video services mainly for entertainment [44], we can deduce that they enjoy watching various videos rather than approaching content with an objective, ethical understanding.

On the other hand, all four sub-factors of digital literacy significantly affected the perceived ease of use of AI recommendation services. The influence appeared in this order: the ability to use artificial intelligence technology, to act ethically, to critically understand, and to recognize the social impact of artificial intelligence. This result aligns with Choi's (2021) study, which confirmed that when the ability to understand and utilize digital technology is high, one can easily accept new digital technologies, resulting in increased awareness of their usability [45]. In this respect, the digital literacy scale developed in this study captures an ability necessary for living in a digital environment being changed by artificial intelligence technology; in other words, it is a competency essential for acquiring new skills and for the daily use of AI technology.

This study developed a new digital literacy scale considering the characteristics of the AI age. It examined the relationship between the sub-factors of digital literacy and the perception of usefulness and ease of use. The significance of this study based on the main conclusions is as follows. From an academic point of view, this study expanded the concept of digital literacy to develop a new measurement tool considering the characteristics of the AI era. The findings are meaningful in seeking a direction for education in future AI from a fundamental and comprehensive perspective. In addition, our results will serve as basic data for follow-up studies that expand the sub-factors of digital and artificial intelligence literacy.

This study has the following practical implications. First, the rapid growth of the online market and the aftermath of COVID-19 have depressed the offline distribution industry, so all organizations face a reconstructed industrial ecosystem: digital transformation [46]. Also, since this study targeted college students, it has practical significance in helping researchers understand their learning process. In addition, the study highlights implications for future digital business strategies and provides useful strategic tools for companies providing AI services to identify, adjust, and strengthen users' digital literacy levels. Moreover, this study identified four sub-factors of AI digital literacy; companies can therefore choose the capability most suitable for reflecting and strengthening the characteristics of their products rather than considering all capabilities at once.

The limitations of this study and suggestions for follow-up studies are as follows. First, for the digital literacy proposed in this study to have usefulness as a generalized tool, it is necessary to conduct repeated research (replication) targeting various products or services in the future. Since digital literacy research in the era of AI is in its infancy, scholars should explore different related variables to accumulate effective research. Second, this exploratory study focused on developing and presenting a digital literacy scale. Therefore, it did not consider various factors influencing digital literacy, such as the risk and protective factors in developing and cultivating digital literacy among college students, the impact of digital literacy on college student school life, career preparation, future competency development, social–emotional development, the impact on social participation, etc. Therefore, future studies should verify digital literacy’s influencing factors, thereby making practical contributions to digital literacy-related education, counseling, and social welfare practices. Third, the method used in this study went through a general-scale development process. Therefore, we hope that future studies will achieve high validity with the digital literacy for AI scale through a more systematic scale development process.

References

  1. H. Tinmaz, Y. T. Lee, M. Fanea-Ivanovici and H. Baber, "A systematic review on digital literacy," Smart Learning Environments, vol. 9, no. 1, pp. 1-18, 2022. https://doi.org/10.1186/s40561-022-00204-y
  2. P. Gilster, Digital Literacy, New York, NY, USA: Wiley Computer Publishing, 1997. [Online]. Available: http://www.ncsu.edu/meridian/jul99/diglit/index.html
  3. G. Falloon, "From digital literacy to digital competence: the teacher digital competency (TDC) framework," Educational Technology Research and Development, vol. 68, pp. 2449-2472, 2020.
  4. R. Reynolds, "Reconstructing "digital literacy" in a Constructionist computer club: The role of motivation, interest, and inquiry in children's purposive technology use," Ph.D. dissertation, Dept. Mass Com., Syracuse Univ., New York, USA, 2008.
  5. L. De Leon, R. Corbeil and M. E. Corbeil, "The development and validation of a teacher education digital literacy and digital pedagogy evaluation," Journal of Research on Technology in Education, vol. 55, no 3, pp. 477-489, 2023. https://doi.org/10.1080/15391523.2021.1974988
  6. M. Prensky, "Digital Natives, Digital Immigrants Part 2: Do They Really Think Differently?," On the Horizon, Vol. 9, No. 6, pp. 1-6, 2001. https://doi.org/10.1108/10748120110424843
  7. M. Prensky, "Digital wisdom and homo sapiens digital," in Deconstructing digital natives, 2011, pp. 15-29. [Online]. Available: https://www.taylorfrancis.com/chapters/edit/10.4324/9780203818848-3/digital-wisdom-homo-sapiens-digital-marc-prensky
  8. J. Bancroft, "Multiliteracy centers spanning the digital divide: Providing a full spectrum of support," Computers and Composition, vol. 41, pp. 46-55, 2016. https://doi.org/10.1016/j.compcom.2016.04.002
  9. C. I. Martinez-Alcala, A. Rosales-Lagarde, M. D. L. A. Alonso-Lavernia, J. A. Ramirez-Salvador, B. Jimenez-Rodriguez, R. M. Cepeda-Rebollar and R. A. Agis-Juarez, "Digital inclusion in older adults: A comparison between face-to-face and blended digital literacy workshops," Frontiers in ICT, vol. 5, 2018.
  10. Y. Eshet, "Digital literacy: A conceptual framework for survival skills in the digital era," Journal of educational multimedia and hypermedia, vol. 13, no. 1, pp. 93-106, 2004.
  11. D. Bawden, "Information and digital literacies: a review of concepts," Journal of Documentation, vol. 57, no. 2, pp. 218-259, 2001. https://doi.org/10.1108/EUM0000000007083
  12. American Library Association, "Digital literacy, libraries, and public policy: Report of the Office for Information Technology Policy's Digital Literacy Task Force," American Library Association (ALA), 2013. [Online]. Available: https://docs.edtechhub.org/lib/VMF2PYCM
  13. A. Ferrari, "Digital competence in practice: An analysis of frameworks," JRC Technical Reports. Institute for Prospective Technological Studies, Eur, 2012. [Online]. Available: http://www.jrc.ec.europa.eu
  14. I. Blau, T. Shamir-Inbal, O. Avdiel, "How does the pedagogical design of a technology-enhanced collaborative academic course promote digital literacies, self-regulation, and perceived learning of students?," The internet and higher education, vol. 45, 2020.
  15. L. C. Larson, "Digital readers: The next chapter in e-book reading and response," The reading teacher, vol. 64, no. 1, pp. 15-22, 2010. https://doi.org/10.1598/RT.64.1.2
  16. H. S. Kim, "A Study on Standard Digital Competencies of Koreans," Korea Information Culture Promotion Agency, 2004.
  17. J. S. Hahn, J. S. Oh, "The Structural Relations of Factors Affecting the Use of Web-Based Instruction(WBI) in Higher Education," The journal of Educational Studies, vol. 37, no. 2, pp. 127-157, 2006.
  18. J. I. Ahn, Y. K. Seo, "Analysis of detailed factors of the digital media literacy gap," Journal of Digital Convergence, vol. 12, no. 2, pp. 69-78, 2014. https://doi.org/10.14400/JDC.2014.12.2.69
  19. M. H. Chung, J. Y. Kim, H. S. Hwang, "A Study on Development and Validation of Digital Literacy Measurement Tool," Journal of the Internet Information Society, vol. 22, no. 4, pp. 51-63, 2021.
  20. S. Y. Choi, "A Study on Digital Competence in the Era of the 4th Industrial Revolution," Journal of Computer Education Society, vol. 21, no. 5, pp. 25-35, 2018. https://doi.org/10.32431/kace.2018.21.5.003
  21. H. Z. Hou, Y. G. Wang, "The Discussion on the Framework Construction and Training Mechanism of Primary and Middle School Students' Intelligent Literacy In the Era of Artificial Intelligence," Digital Education, vol. 6, no. 6, pp. 50-55, 2020.
  22. C. Y. Lee, "Direction of Software Education in Practical Arts for Cultivating Competencies in the AI Era," Practical education research, vol. 26, no. 2, pp. 41-64, 2020. https://doi.org/10.29113/skpaer.2020.26.2.003
  23. M. Wang, "Construction and Cultivation of Students' Intelligent Literacy Based on Core Literacy," Contemporary Educational Science, no.2, pp. 83-85, 2018.
  24. Y. M. Lee, Y. S. Park, "Establishing a Definition of AI Literacy and Designing a Liberal Arts Education Program," The Journal of Language & Literature, vol. 85, pp. 451-474, 2021.
  25. D. Long, B. Magerko, "What is AI literacy? Competencies and design considerations," in Proc. of the 2020 CHI conference on human factors in computing systems, pp. 1-16, 2020.
  26. R. Saade, B. Bahli, "The impact of cognitive absorption on perceived usefulness and perceived ease of use in on-line learning: an extension of the technology acceptance model," Information & management, vol. 42, no. 2, pp. 317-327, 2005.
  27. F. D. Davis, "A technology acceptance model for empirically testing new end-user information systems: Theory and results," Ph.D. dissertation, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, USA, 1985.
  28. F.D. Davis, "Perceived usefulness, perceived ease of use, and user acceptance of information technology," MIS quarterly, vol. 13, no. 3, pp. 319-340, 1989. https://doi.org/10.2307/249008
  29. J. S. Lim, H. B. Kim, "A Study on Factors Influencing Preventive Behavioral Intention Related to Particulate Matter: Use of Smartphone and Application of Extended Technology Acceptance Model (TAM II)," Advertising Research, vol. 131, pp. 110-139, 2021. https://doi.org/10.16914/ar.2021.131.110
  30. S. Nikou, M. De Reuver, M. Mahboob Kanafi, "Workplace literacy skills-how information and digital literacy affect adoption of digital technology," Journal of Documentation, vol. 78, no. 7, pp. 371-391, 2022. https://doi.org/10.1108/JD-12-2021-0241
  31. M. Choi, P. K. Seo, M. I. Choi, H. J. Paek, "Factors Associated with Health-Specific TV Viewing Intention: Application of the Technology Acceptance Model," Korean Journal of Journalism & Communication Studies, vol. 58, no. 6, pp. 362-389, 2014.
  32. S. Mohammadyari and H. Singh, "Understanding the effect of e-learning on individual performance: The role of digital literacy," Computers & Education, vol. 82, pp. 11-25, 2015.
  33. C. G. Jang and W. J. Sung, "A Study on the Intent of Adopting Artificial Intelligence-Based Public Service Policy: Focusing on the Effects of Individual Perception and Digital Literacy Level," Informatization Policy, vol. 29, no. 1, pp. 60-83, 2022.
  34. L. Herckis, "Passing the baton: Digital literacy and sustained implementation of elearning technologies," Current Issues in Emerging eLearning, vol. 5, no. 1, 2018.
  35. D. Bawden, "Information and digital literacies: a review of concepts," Journal of Documentation, Vol. 57 No. 2, pp. 218-259, 2001. https://doi.org/10.1108/EUM0000000007083
  36. W. J. Lee, S. H. Kim, E. H. Lee, "Developing a Digital Literacy Curriculum Framework," Journal of educational research, vol. 40, no. 3, pp. 201-221, 2019.
  37. C. S. Doyle, "Outcome Measures for Information Literacy within the National Education Goals of 1990. Final Report to National Forum on Information Literacy. Summary of Findings," ERIC, 1992.
  38. H. S. Jung, K. Heo, "Convergence of Media Literacy and Computational Thinking for Critical Thinking," Journal of korean Culture (JKC), vol. 48, pp. 105-128, 2020. https://doi.org/10.35821/jkc.2020.02.48.105
  39. L. C. Zhu, C. Wang, H. S. Hwang, "A Study on Factors Affecting University Students' Satisfaction with YouTube AI Recommendation System," Journal of Internet Computing and Services, vol. 23, no. 3, pp. 77-85, 2022.
  40. L. R. Fabrigar, D. T. Wegener, Exploratory factor analysis, New York NY, USA: Oxford University Press, 2011. [Online]. Available: http://www.oup.com
  41. C. J. Willmott, "On the validation of models," Physical geography, vol. 2, no. 2, pp. 184-194, 1981. https://doi.org/10.1080/02723646.1981.10642213
  42. R. B. Kline, Principles and practice of structural equation modeling, New York NY, USA: Guilford publications, 2015. [Online]. Available: http://www.guilford.com
  43. J. Y. Kim, et al, "Basic Research for the Development of Cognitive Domain Assessment Tools for Digital Literacy," Cheongram Language and Literature Education, vol. 62, pp. 7-39, 2017.
  44. L. C. Zhu, W. Z. Lum, H. S. Hwang, "A Study on the Satisfaction of Star YouTube Channel Viewers: Focusing on the Mediating Effect of Flow," Journal of Social Science, vol. 61, no. 3, pp. 121-145, 2022. https://doi.org/10.22418/JSS.2022.12.61.3.121
  45. Y. N. Choi, "A Study on the Effect of Digital Technology Acceptance on the Life Satisfaction of Urban Residents in the Era of Covid 19: Focusing on the Moderating Effect of Digital Literacy," Journal of local government studies, vol. 33, no. 3, pp. 187-219, 2021.
  46. S. Satoglu, A. Ustundag, E. Cevikcan, M. B. Durmusoglu, "Lean transformation integrated with Industry 4.0 implementation methodology," Industrial Engineering in the Industry 4.0 Era, pp. 97-107, 2018.