Abstract
In recent years, artificial intelligence (AI) has made
significant strides, driven largely by the widespread adoption of
open-source machine learning models across various industries.
Given the high resource demands of training models on large
datasets, many applications now rely on pre-trained models, which save
considerable time and resources by allowing organizations to concentrate
on training and sharing these crucial models. However, using open-source
models introduces privacy and security risks that are often
neglected. These models can sometimes harbor hidden functionalities
that, when triggered by specific input patterns, alter system
behavior, for example causing self-driving cars to disregard other vehicles.
The impact of a successful privacy or security attack can range from
minor service disruptions to severe consequences such as physical
harm or the disclosure of sensitive user data.
This research offers an in-depth review of
common privacy and security risks linked to open-source models, aiming
to raise awareness and encourage the responsible and secure use of AI
systems.