THREAT MODEL DETECTION USING AI
Keywords:
Artificial Intelligence (AI), Cybersecurity, Threat Model Detection, Anomaly Detection, Predictive Analytics, Adversarial AI, Ethical Implications, Data Privacy

Abstract
This paper explores the integration of Artificial Intelligence (AI) into threat model detection within cybersecurity frameworks, addressing increasingly sophisticated cyber threats that often outpace traditional security measures. By harnessing AI's capacity to learn and adapt to new and evolving threat patterns, the study presents AI as a critical tool for strengthening cybersecurity defenses and offers a detailed analysis of AI methodologies in threat detection, including pattern recognition, anomaly detection, and predictive analytics. It critically evaluates the effectiveness of AI against conventional security approaches, highlighting the speed, efficiency, and adaptability of AI technologies. The paper also examines the challenges AI faces, such as data privacy concerns, ethical implications, and the risk of adversarial attacks, and forecasts future directions for AI-driven cybersecurity. This examination underscores the transformative potential of AI in cybersecurity and urges continued advancement of AI technology to stay ahead of cyber adversaries.
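As an illustrative aside (not drawn from the paper itself), the short Python sketch below shows one common form of anomaly detection used in threat monitoring: a scikit-learn IsolationForest trained on presumed-benign network-flow features and then used to flag outlying flows. The feature set (bytes transferred, packets per second, distinct destination ports), the synthetic data, and the contamination setting are all assumptions made for the sake of the example.

# Illustrative sketch only: anomaly-detection-based threat flagging with
# scikit-learn's IsolationForest. The feature layout (bytes sent, packets per
# second, distinct destination ports) is a hypothetical example, not the
# paper's dataset.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: modest byte counts, packet rates, port fan-out.
normal = rng.normal(loc=[500, 20, 3], scale=[100, 5, 1], size=(1000, 3))

# Simulated suspicious flows: large transfers and wide port scanning.
suspicious = rng.normal(loc=[5000, 200, 60], scale=[500, 20, 10], size=(10, 3))

# Train on traffic assumed to be mostly benign; contamination is an assumed
# prior on how much of the training data may already be anomalous.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

# predict() returns 1 for inliers and -1 for anomalies (potential threats).
samples = np.vstack([normal[:5], suspicious])
labels = model.predict(samples)
for features, label in zip(samples, labels):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{status}: bytes={features[0]:.0f} pps={features[1]:.0f} ports={features[2]:.0f}")

Unsupervised detectors of this kind complement signature-based tools because they require no labeled attack data, which is one reason anomaly detection features prominently in AI-driven threat detection.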
License
Copyright (c) 2024 Dinesh Reddy Chittibala (Author)
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.