SMU Research Data Repository (RDR)

Data and codes for "Disentangling Multi-view Representations Beyond Inductive Bias"

dataset
posted on 2023-10-06, 08:55 authored by Guanzhou KE, Yang YU, Guoqing CHAO, Xiaoli WANG, Chenyang XU, Shengfeng HE
<p dir="ltr">This record contains the data and codes for this paper:</p><p dir="ltr">Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu, and Shengfeng He. 2023. "Disentangling Multi-view Representations Beyond Inductive Bias." In Proceedings of the 31st ACM International Conference on Multimedia (MM '23), October 29–November 3, 2023, Ottawa, ON, Canada. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3581783.3611794</p><p dir="ltr"><i>dmrib-weights</i> is the file for pre-trained weights. <br><i>DMRIB-main </i>is a copy of the project's GitHub Repository at https://github.com/Guanzhou-Ke/DMRIB</p><p dir="ltr">The official repos for ""Disentangling Multi-view Representations Beyond Inductive Bias"" (DMRIB)</p><ul><li>Status: Accepted in ACM MM 2023.</li></ul><h2>Training step</h2><p dir="ltr">We show that how <code>DMRIB</code> train on the <code>EdgeMnist</code> dataset.</p><p dir="ltr">Before the training step, you need to set the <code>CUDA_VISIBLE_DEVICES</code>, because of the <code>faiss</code> will use all gpu. It means that it will cause some error if you using <code>tensor.to()</code> to set a specific device.</p><ol><li>set environment.</li></ol><pre><pre>export CUDA_VISIBLE_DEVICES=0<br></pre></pre><ol><li>train the pretext model. First, we need to run the pretext training script <code>src/train_pretext.py</code>. We use simclr-style to training a self-supervised learning model to mine neighbors information. The pretext config commonly put at <code>configs/pretext</code>. You just need to run the following command in you terminal:</li></ol><pre><pre>python train_pretext.py -f ./configs/pretext/pretext_EdgeMnist.yaml<br></pre></pre><ol><li>train the self-label clustering model. Then, we could use the pretext model to training clustering model via <code>src/train_scan.py</code>.</li></ol><pre><pre>python train_scan.py -f ./configs/scan/scan_EdgeMnist.yaml<br></pre></pre><p dir="ltr">After that, we use the fine-tune script to train clustering model <code>scr/train_selflabel.py</code>.</p><pre><pre>python train_selflabel.py -f ./configs/scan/selflabel_EdgeMnist.yaml<br></pre></pre><ol><li>training the view-specific encoder and disentangled. Finally, we could set the self-label clustering model as the consisten encoder. And train the second stage via <code>src/train_dmrib.py</code>.</li></ol><pre><pre>python train_dmrib.py -f ./configs/dmrib/dmrib_EdgeMnist.yaml<br></pre></pre><h2>Validation</h2><p dir="ltr">Note: you can find the pre-train weights in the file <code><em>dmrib-weights</em></code>. And put the pretrained models into the following folders <code>path to/{config.train.log_dir}/{results}/{config.dataset.name}/eid-{config.experiment_id}/dmrib/final_model.pth</code>, respectively. For example, if you try to validate the <code>EdgeMnist</code> dataset, the default folder is <code>./experiments/results/EdgeMnist/eid-0/dmrib</code>. And then, put the pretrained model <code>edge-mnist.pth</code> into this folder and rename it to <code>final_model.pth</code>.</p><p dir="ltr">If you do not want to use the default setting, you have to modify the line 58 of the <code>validate.py</code>.</p><pre><pre>python validate.py -f ./configs/dmrib/dmrib_EdgeMnist.yaml<br></pre></pre><h2>Credit</h2><p dir="ltr">Thanks: <code>Van Gansbeke, Wouter, et al. "Scan: Learning to classify images without labels." Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X. 
If you do not want to use the default settings, modify line 58 of `validate.py`.

```
python validate.py -f ./configs/dmrib/dmrib_EdgeMnist.yaml
```

## Credit

Thanks: Van Gansbeke, Wouter, et al. "SCAN: Learning to Classify Images without Labels." Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X. Cham: Springer International Publishing, 2020.

## Citation

```
Guanzhou Ke, Yang Yu, Guoqing Chao, Xiaoli Wang, Chenyang Xu,
and Shengfeng He. 2023. Disentangling Multi-view Representations Beyond
Inductive Bias. In Proceedings of the 31st ACM International Conference
on Multimedia (MM '23), October 29–November 3, 2023, Ottawa, ON, Canada.
ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3581783.3611794
```
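For convenience, here is the complete `EdgeMnist` command sequence from the sections above, collected into one sketch. It assumes the commands are run from the directory that holds the training scripts (`src/` in the GitHub repository) with the default configs, and that a checkpoint is already in place for the validation step:

```
# Restrict faiss to a single GPU before launching any stage.
export CUDA_VISIBLE_DEVICES=0

# Stage 1: SimCLR-style pretext training.
python train_pretext.py -f ./configs/pretext/pretext_EdgeMnist.yaml

# Stage 2: self-label clustering (SCAN) followed by self-label fine-tuning.
python train_scan.py -f ./configs/scan/scan_EdgeMnist.yaml
python train_selflabel.py -f ./configs/scan/selflabel_EdgeMnist.yaml

# Stage 3: view-specific encoders and disentanglement (DMRIB).
python train_dmrib.py -f ./configs/dmrib/dmrib_EdgeMnist.yaml

# Validation (expects final_model.pth in the experiment folder; see Validation).
python validate.py -f ./configs/dmrib/dmrib_EdgeMnist.yaml
```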



Confidential or personally identifiable information

  • I confirm that the uploaded data has no confidential or personally identifiable information.
