Adult Media Literacy in 2024, a recent study by Western Sydney University, found alarmingly low levels of media literacy among Australians, a concern given the deepfake capabilities that newer AI technologies have introduced.
Given that human error continues to be the main cause of security vulnerabilities, this deficit poses a threat to IT security. According to the report, the need for a coherent national response is more essential than ever as deepfakes and disinformation become more sophisticated.
Because AI can produce highly convincing disinformation, the risk of human error is magnified. Individuals without media literacy training are more likely to fall victim to such schemes, potentially compromising sensitive information or systems.
The growing risk of deepfakes and disinformation
While AI offers obvious advantages in how information is produced and distributed, it also poses new challenges, such as deepfakes and disinformation, that demand a high level of media literacy across the country to combat.
According to Tanya Notley, an associate professor at Western Sydney University who was involved in the adult media literacy report, AI introduces particular difficulties for media literacy.
“It’s just getting harder and harder to identify where AI has been used,” she told TechRepublic.
Overcoming these difficulties requires people to be able to tell the difference between a reliable source and one that is likely to post deepfakes.
Unfortunately, about 1 in 3 Australians (34%) report having “low confidence” in their media literacy. Education is a factor: only one in four Australians (25%) with a low level of education said they had confidence in checking information they found online.
Why media literacy is important for cyber security
Although it may not be immediately evident, the link between media literacy and cyber security is crucial. Recent research from Proofpoint found that 74% of CISOs consider human error to be the “most significant” vulnerability in organisations.
This problem is made worse by poor media literacy. When people are unable to accurately assess the credibility of information, they become more vulnerable to common cyber security threats, including phishing scams, social engineering, and other forms of manipulation.
A now-notorious instance of this occurred in May, when scammers successfully persuaded an employee to transfer $25 million to a series of Hong Kong bank accounts by using a deepfake to impersonate the CFO of engineering firm Arup.
The importance of media literacy to national security
As Notley noted, improving media literacy is not just a matter of education. It is a national security imperative, especially in Australia, a country that already faces a shortage of cyber security skills.
“Focusing on one thing, which many people do, such as regulation, is inadequate,” she said. “We really have to have a multi-pronged approach, and media literacy does a number of different things. One of them is to increase people’s understanding of how generative AI is being used, as well as how to think critically and ask questions about it.”
According to Notley, this multi-pronged approach may include:
- Media literacy education: Community organisations and academic institutions should develop robust media literacy programs that train people to critically examine digital content. These programs should cover both conventional media and the subtleties of AI-generated content.
- Regulation and policy: Governments must create and enforce laws that hold digital platforms accountable for the content they host. This includes requiring accountability for the accuracy of AI-generated content and ensuring platforms take proactive steps to stop the spread of disinformation.
- Public awareness campaigns: These are necessary to raise awareness of the risks of low media literacy and the value of critical information consumption. Such campaigns should be targeted at all demographics, including those less likely to be digitally literate.
- Industry collaboration: The IT industry plays a vital role in enhancing media literacy. Technology companies can collaborate with organisations like the Australian Media Literacy Alliance to create tools and resources that help users recognise and combat disinformation.
- Training and education: Media literacy should be a core component of staff training and be regularly updated to reflect changes in the environment, just as first aid and workplace safety drills are regarded as essential.
How the IT industry can support media literacy
The IT industry has a unique responsibility to treat media literacy as a key component of security. By building tools that can identify and flag AI-generated content, tech companies can help users navigate the online landscape more safely.
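As an illustration only, the kind of flagging logic described above might start with a provenance check: content that declares an AI generator gets an explicit label, while signed capture metadata earns a "verified" badge. This is a hypothetical sketch using a simple dict-based metadata model invented for the example; real platforms would parse a provenance standard such as C2PA Content Credentials and verify cryptographic signatures.

```python
# Hypothetical sketch of provenance-based content labelling.
# The metadata dict and field names ("generator", "signed",
# "capture_device") are assumptions for illustration, not a real API.

def classify_content(metadata: dict) -> str:
    """Return a display label based on declared provenance metadata."""
    generator = str(metadata.get("generator", "")).lower()

    # Content that declares a known AI generator is labelled explicitly.
    if any(tag in generator for tag in ("dall-e", "midjourney", "stable diffusion")):
        return "AI-generated"

    # Signed provenance data from a capture device earns a verified badge.
    if metadata.get("signed") and metadata.get("capture_device"):
        return "verified origin"

    # Everything else makes no claim either way; prompt user caution.
    return "unverified"


print(classify_content({"generator": "Stable Diffusion 3"}))           # AI-generated
print(classify_content({"signed": True, "capture_device": "camera"}))  # verified origin
print(classify_content({}))                                            # unverified
```

In practice, the hard problem is content that carries no metadata at all, which is why such labelling is a complement to media literacy education rather than a substitute for it.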
The Proofpoint research also found that CISOs are optimistic about the potential of AI-powered solutions and other technologies to reduce human-centric risks, suggesting that technology can be part of the answer to the problem technology creates.
However, it is also important to build a no-blame culture. One of the biggest contributors to human error is that many people are afraid to speak up out of fear of punishment or even losing their jobs.
Finally, one of the biggest defences we have against disinformation is the free and open exchange of information, so CISOs and IT teams should actively encourage people to speak up, flag content that concerns them, and, if they are worried they have fallen for a deepfake, report it immediately.