@dataexpert.bsky.social
5/5 one by detecting the attacks with a deep model; the second by using adversarial training to improve a model's robustness against a specific attack, making it less vulnerable.
https://link.springer.com/article/10.1007/s41060-023-00438-0
February 9, 2024 at 8:14 PM
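The thread only names the second defence; a minimal sketch of what adversarial training for a time series classifier could look like is below, using FGSM-style perturbations in PyTorch. The model, epsilon, and 50/50 loss mix are illustrative assumptions, not the paper's setup.

```python
# Illustrative sketch: adversarial training of a time series classifier
# with FGSM-crafted perturbations. Not the paper's exact procedure.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps, loss_fn):
    """Craft an FGSM perturbation of the input series x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, x, y, optimizer, eps=0.1):
    """One training step mixing clean and adversarial examples."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = fgsm_perturb(model, x, y, eps, loss_fn)
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```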
4/5 smoother perturbations. We introduce a function to measure the smoothness of time series. Using it, we find that smooth perturbations are harder to detect, both visually by the naked eye and by deep learning models. We also show two ways to protect against adversarial attacks: the first
February 9, 2024 at 8:14 PM
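The thread does not give the smoothness function itself; a purely illustrative stand-in based on the mean absolute second difference of the perturbation is sketched below (the paper's definition may differ).

```python
import numpy as np

def smoothness(perturbation):
    """Rough smoothness score for a 1-D perturbation: a lower mean absolute
    second difference means a smoother, less 'spiky' signal.
    (Illustrative definition; not necessarily the paper's exact measure.)"""
    diffs = np.diff(perturbation)
    return float(np.mean(np.abs(np.diff(diffs))))

# A sawtooth-like perturbation scores worse (higher) than a low-frequency
# sine wave of the same amplitude.
t = np.linspace(0, 1, 200)
spiky = 0.1 * np.sign(np.sin(40 * np.pi * t))
smooth = 0.1 * np.sin(2 * np.pi * t)
print(smoothness(spiky), smoothness(smooth))
```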
3/5 discernible patterns such as sawtooth shapes and spikes. Adversarial patterns are not perceptible on images, but the attacks proposed to date are readily perceptible on time series. To generate stealthier adversarial attacks for time series, we propose a new attack that produces
February 9, 2024 at 8:14 PM
2/5 introduced for image classifiers and are well studied for that task. For time series, few attacks have been proposed so far, and most are adaptations of attacks originally designed for image classifiers. Although these attacks are effective, they generate perturbations containing clearly
February 9, 2024 at 8:14 PM
7/8 understanding of model predictions through visually interpretable explanations at both local and global levels. Overall, this study aims to bridge the gap between the complexity of ML models and the need for interpretability, ultimately enhancing trust and usability in AI-driven applications.
February 9, 2024 at 8:10 PM
6/8 compared to the kernel explainer. By proposing the LIMASE methodology, this work contributes to the field of ML model interpretability and provides a practical solution to the challenges posed by complex and opaque ML models. The proposed approach empowers users to gain a deeper
February 9, 2024 at 8:10 PM
5/8 explanations, (b) It provides visually interpretable global explanations by plotting local explanations for multiple data points, (c) It offers a solution for the submodular optimization problem, (d) It provides insights into regional interpretation, and (e) It enables faster computation
February 9, 2024 at 8:10 PM
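Point (b) above can be pictured with a short sketch: stack per-instance Shapley values and summarise them with the standard shap.summary_plot call. The local values could come from a per-instance procedure like the one sketched after post 4/8 below; the function name global_view is hypothetical.

```python
import numpy as np
import shap

def global_view(local_shap_values, X, feature_names=None):
    """Aggregate per-instance Shapley values into one global summary plot."""
    values = np.vstack(local_shap_values)  # (n_instances, n_features)
    shap.summary_plot(values, X, feature_names=feature_names)
```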
4/8 leverages Shapley values within the LIME paradigm to achieve several objectives: (a) It explains the prediction of any model by utilizing a locally faithful and interpretable decision tree model. The Tree Explainer is employed to calculate the Shapley values, enabling visually interpretable
February 9, 2024 at 8:10 PM
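Read literally, 4/8 suggests a procedure along these lines: sample a LIME-style neighbourhood around one instance, fit a shallow decision tree as the locally faithful surrogate, and run Tree SHAP on that surrogate. The sketch below is an assumption about how that might look, not the authors' code; limase_explain and its parameters are hypothetical names.

```python
import numpy as np
import shap
from sklearn.tree import DecisionTreeRegressor

def limase_explain(black_box_predict, x, n_samples=1000, scale=0.1, max_depth=4):
    """Return Shapley values for instance x from a local tree surrogate."""
    # Sample a neighbourhood around x and label it with the black-box model.
    neighbourhood = x + np.random.normal(0, scale, size=(n_samples, x.shape[0]))
    targets = black_box_predict(neighbourhood)
    # Locally faithful, interpretable surrogate: a shallow decision tree.
    surrogate = DecisionTreeRegressor(max_depth=max_depth).fit(neighbourhood, targets)
    # Tree SHAP on the surrogate gives fast, exact Shapley values locally.
    explainer = shap.TreeExplainer(surrogate)
    return explainer.shap_values(x.reshape(1, -1))
```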
3/8 explainable AI methods to enhance the interpretability and explainability of ML models, thereby increasing the trustworthiness of their predictions. In this study, we propose a methodology called Local Interpretable Model Agnostic Shap Explanations (LIMASE). This ML explanation technique
February 9, 2024 at 8:10 PM
2/8 Unfortunately, many of these models are commonly treated as black boxes, lacking user interpretability. As a result, understanding and trusting the predictions made by such complex ML models have become more challenging. However, researchers have developed various frameworks that employ
February 9, 2024 at 8:10 PM