Views: 783 | Replies: 7
[Reward thread] Reply to this post to share 13 gold coins from the author liyangnpu; each reply earns 1 coin (one chance per person).
[Discussion]
【征稿】Future-Generation Attack and Defense in Neural Networks (FGADNN)
Special Issue -- Future-Generation Attack and Defense in Neural Networks (FGADNN)

Aims & Scope

Neural networks have demonstrated great success in many fields, e.g., natural language processing, image analysis, speech recognition, recommender systems, and physiological computing. However, recent studies have revealed that neural networks are vulnerable to adversarial attacks, which may hinder their adoption in high-stakes scenarios. Thus, understanding their vulnerability and developing robust neural networks have attracted increasing attention.

To understand and accommodate the vulnerability of neural networks, various attack and defense techniques have been proposed. According to the stage at which the adversarial attack is performed, there are two types of attacks: poisoning attacks and evasion attacks. The former happen at the training stage and create backdoors in the machine learning model by adding contaminated examples to the training set. The latter happen at the test stage, where deliberately designed tiny perturbations are added to benign test samples to mislead the neural network. According to how much the attacker knows about the target model, there are white-box, gray-box, and black-box attacks. According to the outcome, there are targeted and non-targeted (indiscriminate) attacks. Many different attack scenarios result from combinations of these attack types.
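To make the evasion-attack idea above concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a classic white-box evasion attack, applied to a toy logistic-regression "network". The weights, inputs, and epsilon are illustrative assumptions, not part of the call for papers:

```python
import math

# Toy stand-in for a neural network: sigmoid(w.x + b).
# All values below are hypothetical, chosen only for illustration.
w = [2.0, -1.0]
b = 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, eps):
    """FGSM evasion attack: step each feature by eps in the sign of
    dL/dx, where L is the cross-entropy loss. For this linear model,
    dL/dx = (p - y) * w, so no autodiff is needed."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

x_benign = [1.0, 0.5]              # benign sample, true label y = 1
x_adv = fgsm(x_benign, 1, eps=0.8)

print(predict(x_benign))           # above 0.5: correctly classified
print(predict(x_adv))              # below 0.5: flipped by the perturbation
```

The same sign-of-gradient step, applied pixel-wise with a small epsilon, is what produces the visually imperceptible perturbations the paragraph above describes.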
Several adversarial defense strategies have also been proposed:
• Data modification, which modifies the training set at the training stage or the input data at the test stage, e.g., adversarial training, gradient hiding, transferability blocking, data compression, and data randomization.
• Model modification, which modifies the target model directly to increase its robustness, e.g., regularization, defensive distillation, feature squeezing, or using a deep contractive network or a mask layer.
• Auxiliary tools, i.e., additional machine learning models that robustify the primary model, e.g., adversarial detection models, defense generative adversarial networks (defense-GAN), or a high-level representation guided denoiser.
Because of the popularity, complexity, and lack of interpretability of neural networks, more attacks are expected to emerge, in various scenarios and applications, and it is critically important to develop strategies to defend against them. This special issue focuses on adversarial attacks and defenses in various future-generation neural networks, e.g., CNNs, LSTMs, ResNets, Transformers, BERT, spiking neural networks, and graph neural networks. We invite both reviews and original contributions on the theory (design, understanding, visualization, and interpretation) and applications of adversarial attacks and defenses in future-generation natural language processing, computer vision, speech recognition, recommender systems, etc.
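As one concrete instance of the data-modification defenses listed above, here is a minimal sketch of feature squeezing used both as input compression and as an adversarial-input detector. The toy model, weights, threshold, and sample inputs are all hypothetical assumptions for illustration:

```python
import math

def predict(w, x):
    """Toy one-layer model: sigmoid(w.x). Weights are illustrative."""
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

def squeeze(x, bits=1):
    """Feature squeezing (data compression): quantize each feature,
    assumed to lie in [0, 1], down to 2**bits levels. Tiny adversarial
    perturbations are largely erased by the quantization."""
    levels = 2 ** bits - 1
    return [round(min(max(xi, 0.0), 1.0) * levels) / levels for xi in x]

def looks_adversarial(w, x, threshold=0.2):
    """Detection via squeezing: flag inputs whose prediction moves by
    more than `threshold` after squeezing; benign inputs move little."""
    return abs(predict(w, x) - predict(w, squeeze(x))) > threshold

w = [2.0, -1.0]
print(looks_adversarial(w, [1.0, 0.0]))    # benign input: stable, not flagged
print(looks_adversarial(w, [0.45, 0.55]))  # perturbed input: large shift, flagged
```

This illustrates the auxiliary-tool pattern as well: the detector wraps the primary model rather than modifying it.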
Topics of interest include, but are not limited to:
• Novel adversarial attack approaches
• Novel adversarial defense approaches
• Model vulnerability discovery and explanation
• Trust and interpretability of neural networks
• Attacks and/or defenses in NLP
• Attacks and/or defenses in recommender systems
• Attacks and/or defenses in computer vision
• Attacks and/or defenses in speech recognition
• Attacks and/or defenses in physiological computing
• Adversarial attacks and defenses in various future-generation applications

Evaluation Criteria
• Novelty of the approach (how is it different from existing ones?)
• Technical soundness (e.g., rigorous model evaluation)
• Impact (how does it change the state of the art?)
• Readability (is it clear what has been done?)
• Reproducibility and open source: pre-registration if confirmatory claims are being made (e.g., via osf.io); open data, materials, and code as much as ethically possible

Submission Instructions
All submissions deemed suitable to be sent for peer review will be reviewed by at least two independent reviewers. Authors should prepare their manuscript according to the Guide for Authors, available from the online submission page of Future Generation Computer Systems at https://ees.elsevier.com/fgcs/. Authors should select “VSI: NNVul” when they reach the “Article Type” step in the submission process. Inquiries, including questions about appropriate topics, may be sent to liyangnpu@nwpu.edu.cn. Please make sure to read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal's homepage at https://www.journals.elsevier.co ... n-computer-systems.

Important Dates
● Manuscript Submission Deadline: 20 June 2022
● Peer Review Due: 30 July 2022
● Revision Due: 15 September 2022
● Final Decision: 20 October 2022