
Survey

An Overview of Catastrophic AI Risks. [paper]

Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation. [paper]

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy. [paper]

Adversarial Machine Learning: A Systematic Survey of Backdoor Attack, Weight Attack and Adversarial Example. [paper]

Out-of-Distribution Generalization

Simple and Fast Group Robustness by Automatic Feature Reweighting. [paper]

Optimal Transport Model Distributional Robustness. [paper]

Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization. [paper]

Exact Generalization Guarantees for (Regularized) Wasserstein Distributionally Robust Models. [paper]

Rethinking the Evaluation Protocol of Domain Generalization. [paper]

Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape. [paper]

On the nonlinear correlation of ML performance between data subpopulations. [paper]

An Adaptive Algorithm for Learning with Unknown Distribution Drift. [paper]

PGrad: Learning Principal Gradients For Domain Generalization. [paper]

Benchmarking Low-Shot Robustness to Natural Distribution Shifts. [paper]

Reweighted Mixup for Subpopulation Shift. [paper]

ERM++: An Improved Baseline for Domain Generalization. [paper]

Domain Generalization via Nuclear Norm Regularization. [paper]

ManyDG: Many-domain Generalization for Healthcare Applications. [paper]

DEJA VU: Continual Model Generalization For Unseen Domains. [paper]

Alignment with human representations supports robust few-shot learning. [paper]

Free Lunch for Domain Adversarial Training: Environment Label Smoothing. [paper]

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data. [paper]

Leveraging Domain Relations for Domain Generalization. [paper]

Evasion Attacks and Defenses

Jailbroken: How Does LLM Safety Training Fail? [paper]

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service. [paper]

On adversarial robustness and the use of Wasserstein ascent-descent dynamics to enforce it. [paper]

On the Robustness of AlphaFold: A COVID-19 Case Study. [paper]

Data Augmentation Alone Can Improve Adversarial Training. [paper]

Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing. [paper]

Uncovering Adversarial Risks of Test-Time Adaptation. [paper]

Benchmarking Robustness to Adversarial Image Obfuscations. [paper]

Are Defenses for Graph Neural Networks Robust? [paper]

On the Robustness of Randomized Ensembles to Adversarial Perturbations. [paper]

Defensive ML: Defending Architectural Side-channels with Adversarial Obfuscation. [paper]

Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness. [paper]

Poisoning Attacks and Defenses

Poisoning Language Models During Instruction Tuning. [paper]

Backdoor Attacks Against Dataset Distillation. [paper]

Run-Off Election: Improved Provable Defense against Data Poisoning Attacks. [paper]

Temporal Robustness against Data Poisoning. [paper]

Poisoning Web-Scale Training Datasets is Practical. [paper]

CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning. [paper]

TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets. [paper]

Privacy

SoK: Privacy-Preserving Data Synthesis. [paper]

Ticketed Learning-Unlearning Schemes. [paper]

Forgettable Federated Linear Learning with Certified Data Removal. [paper]

Privacy Auditing with One (1) Training Run. [paper]

DPMLBench: Holistic Evaluation of Differentially Private Machine Learning. [paper]

On User-Level Private Convex Optimization. [paper]

Re-thinking Model Inversion Attacks Against Deep Neural Networks. [paper]

A Recipe for Watermarking Diffusion Models. [paper]

CUDA: Convolution-based Unlearnable Datasets. [paper]

Why Is Public Pretraining Necessary for Private Model Training? [paper]

Personalized Privacy Auditing and Optimization at Test Time. [paper]

Interpretability

Towards Trustworthy Explanation: On Causal Rationalization. [paper]

Don't trust your eyes: on the (un)reliability of feature visualizations. [paper]

Probabilistic Concept Bottleneck Models. [paper]

Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. [paper]

eXplainable Artificial Intelligence on Medical Images: A Survey. [paper]

 

 
