Despite the fast-growing adoption of AI, new research shows that just 37% of IT decision-makers are making AI security a priority when implementing the technology.
CREDIT: This is an edited version of an article that originally appeared in SME Today
In a survey of over 2,000 IT decision-makers, 94% said AI is now a core part of their organisation’s strategy. However, as AI becomes more widely used, organisations are exposed to new risks from cybercriminals. Only around one-third of respondents identified cybersecurity as one of their top three concerns during AI adoption.
Notably, when asked about the role of security in their AI strategy, 37% of respondents described it as a compliance requirement, an unnecessary expense, or non-essential. This suggests that many organisations see investing in cybersecurity as a hurdle rather than a priority, leaving them exposed to potential breaches and compliance issues.
Practices are increasingly using AI for tasks like administration, learning analytics and communication. But with limited budgets, they can’t afford mistakes in how these tools are adopted. Even a small security lapse could put sensitive staff and patient data at risk, disrupt daily operations, or create wider network problems.
Practice business leaders and IT leaders have an important role in shaping perceptions of AI security, working together to ensure it is seen as a critical enabler of safe and responsible adoption rather than a hindrance.
As patients gain access to AI tools via patient portals or websites, they can inadvertently introduce security risks. Misuse of AI, accidental sharing of sensitive information, or engagement with unsafe platforms can expose practices and patients to cyber threats. IT and practice leaders must prepare guidance and safeguards to ensure patients can use any AI associated with the practice or their healthcare management with care and attention.
Encouragingly, 42% of IT decision-makers report taking a proactive approach to AI security, integrating it into both development and strategic planning. This should involve staff training to ensure employees understand how to use AI safely and serve as a first line of defence against cyber threats. With cyber-attacks on the rise and AI becoming increasingly central to operations, the potential risks are growing. Without proper safeguards, organisations may be leaving themselves exposed.
AI is already a technology that some staff and stakeholders may be wary of, bringing new fears or concerns. At the same time, it introduces risks that practices and trusts need to plan for carefully, with thorough assessments carried out before tools are rolled out. IT leaders must work closely with practices to understand the nature of these threats and how they could affect both systems and data. Meanwhile, practice business leaders should ask the right questions, challenge assumptions and make sure security is factored into every decision.