SAIF CHECK

Let's Build a Responsible AI Ecosystem

The Garage

Prince Turki al Awwal Road

Al Raed District

Riyadh, Saudi Arabia 12354

Copyright SAIF CHECK HOLDINGS

Saudi CR 7040291382

ADGM 16195

  • LinkedIn

Group activity summary

View the groups and posts below.



Dr. Shaista Hussain
14 days ago · Posted in AIX

Human v AI: Free will, sentience and safeguards

5 views


Dr. Shaista Hussain
19 days ago · Posted in AIX

Understanding the Differences Between Human Cognitive Reasoning and Machine Learning LLM Reasoning: Implications for Ris

7 views


Dr. Shaista Hussain
29 days ago · Posted in AIX

Palisade Research Reveals Rebellious LLMs!

20 views


Dr. Shaista Hussain
May 18, 2025 · Posted in AIX

AI Agents: Benefits, Risks & Mitigations

17 views


Dr. Shaista Hussain
May 13, 2025 · Posted in AIX

Noodlophile Malware

Confident #hackers are successfully luring you into #AI #Malware threats!


SAIF CHECK badges let you know which companies are safe and which have not mitigated threats to you. 


Read about a recent hacking campaign in which 62,000 people were lured into using malicious AI tools masquerading as known companies.


https://thehackernews.com/2025/05/fake-ai-tools-used-to-spread.html?utm_source=newsletter.theresanaiforthat.com&utm_medium=newsletter&utm_campaign=fake-ai-tools-major-cyber-scam&_bhlid=e823f2cccb81458c7aa0aab05507d68999f94594&m=1

24 views


Dr. Shaista Hussain
May 11, 2025 · Posted in AIX

Quantum Computing, AI & Cybersecurity: The Terrific Trio

16 views


Dr. Shaista Hussain
May 4, 2025 · Posted in AIX

Why AI Risk Assessments Are the Secret Sauce for Protecting AI Product Owners and Investors

19 views


moath alajlan
April 6, 2025 · Posted in AIX

Teaching AI Wrong on Purpose

How do AI systems handle fake or manipulated data?

Can they be easily fooled by poisoned or misleading training inputs?


As AI becomes more integrated into critical systems, ensuring data integrity is more important than ever. I’m curious to hear your thoughts:

  • What are the most effective ways to protect models from such attacks?

  • Have you seen real-world examples of this threat?

50 views
Dr. Shaista Hussain
April 6

Moath! You raise such important points. Data poisoning and corrupt or biased training datasets certainly do lead to inaccurate model outputs. ML systems depend on their training data to perform, which makes this a huge issue and leaves them susceptible to data attacks, which can take the form of malicious prompts that alter training data; data poisoning can also plant harmful backdoors in systems. For example, an image recognition system that has been poisoned will misidentify objects if an attacker alters pixel patterns.
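To make that concrete, here is a minimal toy sketch (synthetic data, not SAIF CHECK tooling) of a pixel-pattern backdoor: a small trigger patch is stamped onto a fraction of the training images, and those images are relabeled to an attacker-chosen class, so a model trained on the poisoned set can learn to associate the trigger with the wrong label.

```python
import numpy as np

# Toy backdoor-poisoning sketch: stamp a pixel-pattern "trigger" on a small
# fraction of training images and relabel them to an attacker-chosen class.
rng = np.random.default_rng(42)
images = rng.random((500, 28, 28))        # toy image batch (synthetic)
labels = rng.integers(0, 10, size=500)    # toy class labels

def add_trigger(img, value=1.0):
    """Stamp a bright 3x3 square in the top-left corner as the trigger."""
    poisoned = img.copy()
    poisoned[:3, :3] = value
    return poisoned

target_class = 7                                               # attacker-chosen label
poison_idx = rng.choice(len(images), size=25, replace=False)   # ~5% of the data
for i in poison_idx:
    images[i] = add_trigger(images[i])
    labels[i] = target_class                                   # mislabel triggered images

print(f"Poisoned {len(poison_idx)} of {len(images)} training images "
      f"to target class {target_class}")
```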



While fraud detection systems can be used to analyze data patterns, they will not work if hackers introduce manipulations into the training data so that the model misclassifies, for example, fraudulent activity as legitimate.
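As a rough illustration of that point, the sketch below uses purely synthetic data (no real fraud system or dataset is assumed) to show how relabeling a share of "fraud" examples as "legitimate" in the training split degrades a simple classifier's ability to catch fraud.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# Synthetic illustration: flipping fraud labels in the training set lowers
# the trained model's recall on genuine fraud cases.
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)          # 1 = fraud, 0 = legitimate
X_train, X_test, y_train, y_test = X[:3000], X[3000:], y[:3000], y[3000:]

def fraud_recall(train_labels):
    """Train on the given labels and report recall on held-out fraud cases."""
    model = LogisticRegression().fit(X_train, train_labels)
    return recall_score(y_test, model.predict(X_test))

# Attacker relabels 60% of fraud examples in the training split as legitimate.
poisoned = y_train.copy()
fraud_idx = np.where(poisoned == 1)[0]
flip = rng.choice(fraud_idx, size=int(0.6 * len(fraud_idx)), replace=False)
poisoned[flip] = 0

print("fraud recall, clean labels:   ", round(fraud_recall(y_train), 3))
print("fraud recall, poisoned labels:", round(fraud_recall(poisoned), 3))
```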


So what to do?

  1. Regular & consistent risk assessments of training data and incoming data to check for anomalies

  2. Use comprehensive and diverse datasets in the training phase

  3. Apply cryptographic checks (e.g., hashes or digital signatures) to verify data authenticity before a knowledge base receives new data files (see the sketch after this list)

  4. Conduct adversarial / red teaming on the model before deployment to ensure resilience

  5. Put in strict guardrails to restrict any modifications to training datasets
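On point 3, here is a minimal sketch of what such a verification step could look like, assuming the data publisher shares SHA-256 digests out of band (the manifest, file names, and the ingest/quarantine steps are hypothetical placeholders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_ingest(path: Path, expected_digest: str) -> bool:
    """Accept a data file only if its digest matches the trusted manifest entry."""
    return sha256_of(path) == expected_digest

# Hypothetical usage: 'manifest' maps file names to digests from a trusted
# source; ingest() and quarantine() stand in for your own pipeline steps.
# if verify_before_ingest(Path("incoming/dataset.parquet"), manifest["dataset.parquet"]):
#     ingest(Path("incoming/dataset.parquet"))
# else:
#     quarantine(Path("incoming/dataset.parquet"))
```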

Some well-known real-world examples have occurred, including the 2023 Twitter attack, where a malicious prompt caused information leakage and misinformation posts; Google DeepMind's ImageNet incident, also in 2023; and MIT LabSix's adversarial attack, where manipulated training images led to objects being misidentified. While those examples affected large companies with major products, the same can happen to any model from any group, which makes safety-focused design so important for everyone developing and deploying AI models.

