
    Dr. Yubei Chen


    UNITED STATES

    03.22.2024

    Video by Kevin D Schmidt 

    Air Force Research Laboratory

    In this edition of QuEST, Dr. Yubei Chen discusses his work on Principles of Unsupervised Representation Learning.

    Key moments in the video include:
    - Introduction to Dr. Chen's lab, mentors, and collaborators
    - The current machine learning paradigm
    - Natural intelligence learns with intrinsic objectives
    - The future machine learning paradigm and unsupervised representation learning
    - Defining unsupervised representation learning
    - Supervision and similarity: spatial co-occurrence, temporal co-occurrence, and Euclidean neighborhoods

    Main points:
    - Deriving an unsupervised representation transform from neural and statistical principles
    - Simplification and unification of deep unsupervised learning
    - The convergence of a neural principle (sparse coding) and a statistical principle (manifold learning)
    - Manifold learning and local linear embedding
    - The sparse manifold transform (see the sketches after this list)
    - Encoding of a natural video sequence
    - Recap of the main points
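
    The talk itself does not include code. As a rough illustration of the neural principle named above, here is a minimal NumPy sketch of sparse coding via ISTA (iterative shrinkage-thresholding): infer a sparse code a so that x ~ D a by alternating a gradient step on the reconstruction error with a soft-threshold step for the L1 penalty. The dictionary, signal, function name, and parameter values are all illustrative assumptions, not material from the talk.

    ```python
    import numpy as np

    def ista_sparse_code(x, D, lam=0.1, lr=0.05, n_steps=200):
        """Illustrative sketch (not from the talk): infer a sparse code a
        with x ~ D @ a via ISTA, i.e. proximal gradient descent on
        0.5 * ||x - D a||^2 + lam * ||a||_1."""
        a = np.zeros(D.shape[1])
        for _ in range(n_steps):
            # Gradient step on the reconstruction term.
            a = a + lr * D.T @ (x - D @ a)
            # Soft threshold: the proximal operator of the L1 penalty.
            a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))   # overcomplete dictionary, 256 atoms
    D /= np.linalg.norm(D, axis=0)       # unit-norm atoms
    x = rng.standard_normal(64)          # stand-in for an image patch
    a = ista_sparse_code(x, D)
    print(f"nonzero coefficients: {np.count_nonzero(a)} / {a.size}")
    ```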
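
    In the same spirit, a hedged sketch of the idea behind the sparse manifold transform: given sparse codes for consecutive video frames, find a linear projection under which the codes change slowly from frame to frame. Here this is posed as a generalized eigenvalue problem, in the spirit of slow feature analysis; the function name, toy data, and all details are assumptions for illustration, not Dr. Chen's actual formulation.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def smt_projection_sketch(A, d=8, eps=1e-6):
        """Illustrative sketch (not from the talk): A holds sparse codes,
        one column per video frame. Return a d-row projection P whose
        outputs P @ A vary slowly in time, by minimizing the covariance
        of temporal differences under a whitening constraint."""
        diffs = A[:, 1:] - A[:, :-1]
        S = diffs @ diffs.T / diffs.shape[1]   # covariance of frame-to-frame changes
        C = A @ A.T / A.shape[1]               # covariance of the codes
        C += eps * np.eye(C.shape[0])          # regularize to keep C positive definite
        _, eigvecs = eigh(S, C)                # generalized eigenproblem S v = w C v
        return eigvecs[:, :d].T                # smallest eigenvalues = slowest directions

    rng = np.random.default_rng(0)
    A = rng.standard_normal((256, 100)) * (rng.random((256, 100)) < 0.05)  # ~5% active codes
    P = smt_projection_sketch(A, d=8)
    print(P.shape)                             # (8, 256)
    ```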

    Audience questions:
    On the three sources of similarity, do you think there is a way to map semantic similarities from crowdsourced resources like ConceptNet?
    Are there equivalencies here with cryo-EM analyses?
    One of the things that made deep learning what it is was AlexNet's performance on ImageNet, right? The same goes for transformers and language translation. So how are you going to demonstrate that this impressive body of work is better than whatever state of the art is out there? How are you going to demonstrate that it's useful?
    Follow-up: Is there a benchmark or standard data set, which you might produce, that establishes something about representation learning?
    Co-occurrence is great for a lot of things, but it is a poor basis for comparison when there are different dimensions of evaluation you might want to use. Are you thinking about extending your ideas beyond things that co-occur, to things that are similar along one dimension but farther apart along others?
    Is there any sort of procedure for pruning vestigial actions that are no longer necessary for the interpolated tasks, so that they don't just propagate down to future interpolations?


    VIDEO INFO

    Date Taken: 03.22.2024
    Date Posted: 03.28.2024 17:10
    Category: Video Productions
    Video ID: 917342
    VIRIN: 240322-O-BA826-5935
    Filename: DOD_110205007
    Length: 01:18:31
    Location: US


    PUBLIC DOMAIN