Top 10 AI Attacks Health Care Technology Professionals Need To Know!

  • Thread starter Thread starter benhenderson
  • Start date Start date
B

benhenderson

Top 10 AI Attacks Health Care Technology Professionals Need To Know!


by Ben Henderson, CISSP

Senior Technical Specialist - Security and Compliance


aka.ms/benhenderson



small?v=v2&px=200.jpg



Healthcare organizations and hospitals are at an increasing risk for AI-based attacks. With the rise of AI technology, attackers are finding new ways to exploit vulnerabilities and compromise sensitive data. In fact, data security breaches are one of the major dangers of AI, particularly in healthcare where privileged information is at stake. Recent statistics show that healthcare is one of the most targeted industries for cyber attacks, with a 55% increase in attacks in 2023 alone. It is crucial for healthcare organizations to stay informed and take proactive measures to protect their AI systems and data.



Below is a list of the Top 10 AI based attacks and security concerns. Follow the links in the headings for more information on the specific attacks. Also, feel free to reach out to me directly at aka.ms/benhenderson or as always information can be found on the Microsoft Security site.



  1. Weaponized Models: The data science and AI/ML research domain is still largely based on academia, and there is a lot of exchange happening - whether it's exchange of data or exchange of models. But this can create some big threats to the AI supply chain. For example, recent research demonstrates how attackers could insert harmful code into pretrained machine learning models to execute a ransomware attack. They could achieve this by taking over a valid model on a repo, making it malicious, and then re-uploading that model.

small?v=v2&px=200.jpg



  1. AI Poisoning Attacks: One of the serious concerns for CISOs is a type of attack called poisoning attacks. Essentially, what happens is that attackers change the data that a deep learning model learns from, which can either damage the model or even alter its output. And this is not the end - there are many data integrity issues that can compromise AI integrity. For instance, researchers are studying how feedback loops and AI bias can affect the reliability of AI output.

small?v=v2&px=200.jpg



  1. Data Security Breaches: One of the major dangers of AI is data protection and data privacy. If AI models are not designed with enough privacy safeguards, attackers can expose the privileged information of the data used to train those models. For example, with membership inference attacks, attackers can query models to find out if a certain piece of data is part of a model. This could be a grave issue in healthcare, where it could be possible to reveal that someone has a specific disease if the inference attack verifies they're in an AI model that researches that disease. And then there are training data extraction attacks, like model inversion, which can actually reconstruct training data.

small?v=v2&px=200.jpg



  1. Sponge Attacks: CISOs will face a new challenge in the future of AI, which is a type of Denial-of-Service attack called a sponge attack. This attack aims to make an AI model inaccessible by generating input that consumes the model's hardware resources.

small?v=v2&px=200.jpg



  1. Prompt Injection: One of the basic rules of development is to never trust user input, because it can enable attacks like SQL injection and cross-site scripting. Now, as generative AI becomes more common, CISOs and Health Care Technologists will also have to deal with prompt injections. This is when attackers use harmful prompts to make generative AI produce wrong or malicious output.

small?v=v2&px=200.jpg



  1. Model Theft: AI/ML models are vulnerable to theft not only the data they use, but also their unique logic and methods. One way to steal a model is to hack into private code repositories by tricking or guessing passwords. Another way is to use model extraction attacks, which try to recreate how a model makes predictions by repeatedly asking it questions. This could affect organizations that have developed their own exclusive AI models in-house.

small?v=v2&px=200.jpg



  1. AI-Created Phishing and BEC Traps: Many of the attacks we have discussed so far have targeted enterprise AI applications. But attackers will also use AI to enhance their attacks on all kinds of enterprise systems and applications. One of the major worries is the use of generative AI to produce phishing emails automatically. Security researchers have already seen a rise in phishing frequency and success since ChatGPT was released online.

small?v=v2&px=200.jpg



  1. Evasion Attacks: Evasion attacks are a type of adversarial AI attack that is quite common and well-known. They can fool detection or classification systems by using some visual deception. For example, attackers could put stickers on a stop sign that are designed to make a self-driving car misread it. And more recently, an attack at the Machine Learning Evasion Competition changed celebrity photos slightly so that they were identified as someone else by an AI facial recognition system.

small?v=v2&px=200.jpg



  1. Malware and Vulnerability Exploitation with Generative AI: This one is very scary. Attackers are using AI that can create new code to assist them in making malware and taking advantage of vulnerabilities to increase and expand their attacks more than they have already done with other automated technology. This is another one on the SANS Top 5 Most Dangerous Cyberattacks for 2023 list.

small?v=v2&px=200.jpg



  1. Deepfake Threats: The deepfake technique is no longer a fantasy and has become a realistic way to attack. CISOs should be educating their workers to help them realize that AI-generated media like voice and video are becoming more accessible than ever, making it very simple to fake a CEO or other executive in order to trick workers into falling for business email compromise and other frauds that involve the exchange of large amounts of money. This is only going to increase the already rising risk of BECs.

small?v=v2&px=200.jpg



In conclusion, AI security threats are a growing concern for CISOs. From AI poisoning attacks to deepfake threats, there are many ways attackers can exploit AI technology. It is important for organizations to stay informed and take proactive measures to protect their AI systems and data. For more information on how to secure your AI systems, reach out to me at aka.ms/benhenderson or your local Microsoft partner.



Stay safe out there!

Ben

Continue reading...
 
Back
Top