no. | title | author | journal | year | vol. | issue | page(s) | type
1 | A Study of Using Synthetic Data for Effective Association Knowledge Learning | Liu, Yuchi | | | 20 | 2 | p. 194-206 | article
2 | Compositional Prompting Video-language Models to Understand Procedure in Instructional Videos | Hu, Guyue | | | 20 | 2 | p. 249-262 | article
3 | DynamicRetriever: A Pre-trained Model-based IR System Without an Explicit Index | Zhou, Yu-Jia | | | 20 | 2 | p. 276-288 | article
4 | Editorial for Special Issue on Large-scale Pre-training: Data, Models, and Fine-tuning | Wen, Ji-Rong | | | 20 | 2 | p. 145-146 | article
5 | EVA2.0: Investigating Open-domain Chinese Dialogue Systems with Large-scale Pre-training | Gu, Yuxian | | | 20 | 2 | p. 207-219 | article
6 | Mitigating Spurious Correlations for Self-supervised Recommendation | Lin, Xin-Yu | | | 20 | 2 | p. 263-275 | article
7 | Multimodal Pretraining from Monolingual to Multilingual | Zhang, Liang | | | 20 | 2 | p. 220-232 | article
8 | Offline Pre-trained Multi-agent Decision Transformer | Meng, Linghui | | | 20 | 2 | p. 233-248 | article
9 | Pre-training in Medical Data: A Survey | Qiu, Yixuan | | | 20 | 2 | p. 147-179 | article
10 | Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks | Zhang, Zhengyan | | | 20 | 2 | p. 180-193 | article
11 | Vision Enhanced Generative Pre-trained Language Model for Multimodal Sentence Summarization | Jing, Liqiang | | | 20 | 2 | p. 289-298 | article