
    Professor Ma Lizhuang's team at Shanghai Jiao Tong University has 14 papers accepted at CVPR 2022

    • Last Update: 2022-04-22
    • Source: Internet
    • Author: User

    Recently, the top computer vision conference CVPR 2022 announced its list of accepted papers; Professor Ma Lizhuang's team at Shanghai Jiao Tong University had 14 papers accepted.


    CVPR (the IEEE Conference on Computer Vision and Pattern Recognition) is a top conference in the field of computer vision and pattern recognition, and a Class A conference recommended by the China Computer Federation.


    The following is a brief introduction to the research content and methods of the accepted papers:

    Paper 1: LAKe-Net: Topology-Aware Point Cloud Completion by Localizing Aligned Keypoints

    Authors: Tang Junshu, Gong Zhijun, Yi Ran, Xie Yuan, Ma Lizhuang

    Work introduction: Point cloud completion aims to recover the complete geometric and topological shape of an object from a partially observed point cloud.


    Paper 2: HybridCR: Weakly-Supervised 3D Point Cloud Semantic Segmentation via Hybrid Contrastive Regularization

    Authors: Li Mengtian, Xie Yuan, Shen Yunhang, Ke Bo, Qiao Ruizhi, Ren Bo, Lin Shaohui, Ma Lizhuang

    Work introduction: To address the high annotation cost of large-scale 3D point cloud semantic segmentation, the team proposes HybridCR, a novel weakly supervised learning framework based on hybrid contrastive regularization.


    Paper 3: Rethinking Efficient Lane Detection via Curve Modeling


    Authors: Feng Zhengyang, Guo Shaohua, Tan Xin, Xu Xu, Wang Min, Ma Lizhuang

    Work introduction: This paper proposes an image-based lane detection method that models lane lines as parametric curves.
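
    The summary does not specify which curve family is used. As a minimal illustrative sketch (assuming, purely as a placeholder, that each lane is regressed as the four control points of a cubic Bezier curve), a predicted lane can be rendered by sampling the curve in image coordinates:

        # Illustrative sketch only: the curve family and output format are
        # assumptions, not the authors' implementation.
        import numpy as np

        def sample_cubic_bezier(ctrl, n=72):
            """ctrl: (4, 2) control points in pixels; returns (n, 2) points on the lane."""
            t = np.linspace(0.0, 1.0, n)[:, None]      # curve parameter
            basis = np.stack([(1 - t) ** 3,            # degree-3 Bernstein basis
                              3 * (1 - t) ** 2 * t,
                              3 * (1 - t) * t ** 2,
                              t ** 3], axis=1)         # (n, 4, 1)
            return (basis * ctrl[None]).sum(axis=1)    # (n, 2)

        # Hypothetical control points for one gently curving lane.
        lane = sample_cubic_bezier(np.array([[640.0, 720.0], [610.0, 540.0],
                                             [590.0, 360.0], [580.0, 180.0]]))
        print(lane.shape)  # (72, 2)

    Regressing a handful of curve parameters per lane, rather than dense per-pixel masks, is part of what makes curve-based formulations attractive for efficiency, as the paper's title suggests.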


    Paper 4: ISDNet: Integrating Shallow and Deep Networks for Efficient Ultra-high Resolution Segmentation

    Authors: Guo Shaohua, Liu Liang, Gan Zhenye, Wang Yabiao, Zhang Wuhao, Wang Chengjie, Jiang Guannan, Zhang Wei, Yi Ran, Ma Lizhuang, Xu Ke

    Work introduction: To address slow inference on ultra-high-resolution images, the team proposes a new method, ISDNet.


    Paper 5: Semantic Segmentation by Early Region Proxy

    Authors: Zhang Yifan, Pang Bo, Lu Cewu

    Work introduction: This paper approaches the problem of semantic segmentation from a new perspective.


    Paper 6: CPPF: Towards Robust Category-Level 9D Pose Estimation in the Wild

    Authors: You Yang, Shi Ruoxi, Wang Weiming, Lu Cewu

    Code address: https://github.


    Work introduction: This paper studies category-level 9D pose estimation from a single RGB-D frame in unseen scenes.


    Paper 7: UKPGAN: A General Self-Supervised Keypoint Detector

    Authors: You Yang, Liu Wenhai, Zhu Yanjie, Li Yonglu, Wang Weiming, Lu Cewu

    Code address: https://github.


    Work introduction: Keypoint detection is essential for object registration and scene alignment.


    Paper 8: Canonical Voting: Towards Robust Oriented Bounding Box Detection in 3D Scenes

    Authors: You Yang, Ye Zelin, Lou Yujing, Li Chengkun, Li Yonglu, Ma Lizhuang, Wang Weiming, Lu Cewu

    Code address: https://github.


    Work introduction: With advances in sensors and in deep learning methods for point clouds, 3D object detection has become an important step in understanding 3D scenes.


    Paper 9: AKB-48: A Real-World Articulated Object Knowledge Base

    Authors: Liu Liu, Xu Wenqiang, Fu Haoyuan, Qian Sucheng, Han Yang, Lu Cewu

    Work introduction: This paper proposes AKB-48, a large-scale real-world articulated object knowledge base for the visual perception and manipulation of real-world articulated objects; it covers 48 common articulated object categories with a total of 2,037 articulated object models.


    Paper 10: Human Trajectory Prediction with Momentary Observation

    Authors: Sun Jianhua, Li Yuxuan, Chai Liang, Fang Haoshu, Li Yonglu, Lu Cewu

    Work introduction: Pedestrian trajectory prediction aims to forecast the future motion of humans from their historical states, a critical step in many autonomous systems.


    Paper 11: ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis

    Authors: Yang Lixin, Li Kailin, Zhan Xinyu, Lu Jun, Xu Wenqiang, Li Jiefeng, Lu Cewu

    Code address: https://github.com/lixiny/ArtiBoost

    Work introduction: Hand-object pose estimation (HOPE) plays an indispensable role in robot imitation learning and in building the foundations of the metaverse. A HOPE dataset needs diversity in three aspects: hand pose, object category and pose, and observation viewpoint. Most real-world datasets lack this diversity, while synthetic datasets can ensure it easily; the difficulties are how to construct plausible hand-object interaction poses and how to learn efficiently from large synthetic datasets. This paper provides a new and efficient solution to articulated hand (+object) pose estimation: ArtiBoost. It covers the three types of diversity through a "hand-object-viewpoint" space and uses a reweighted sampling strategy to emphasize the training samples the current model still struggles to distinguish. During training, ArtiBoost alternates between data exploration and synthesis and blends the synthesized data into the real source data. Because the method only modifies how the dataset is loaded, it adapts quickly to existing CNN networks in a plug-and-play way and lets existing SOTA algorithms be applied rapidly to new hand and hand+object datasets and application scenarios.
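
    Since the summary emphasizes that ArtiBoost only requires changing how the dataset is loaded, the plug-and-play idea can be pictured with a small sketch (the dataset names, mixing ratio, and sampling rule below are placeholder assumptions, not the authors' code):

        # Sketch of blending online-synthesized samples into the real data stream.
        import random
        from torch.utils.data import Dataset

        class MixedHOPEDataset(Dataset):
            """Returns a synthetic hand-object sample with probability `synth_ratio`,
            otherwise a real one, so the rest of the training loop stays unchanged."""
            def __init__(self, real_ds, synth_ds, synth_ratio=0.5):
                self.real_ds, self.synth_ds = real_ds, synth_ds
                self.synth_ratio = synth_ratio

            def __len__(self):
                return len(self.real_ds)

            def __getitem__(self, idx):
                if random.random() < self.synth_ratio:
                    # In ArtiBoost, a reweighted sampler over the hand-object-viewpoint
                    # space would decide which configuration to synthesize here.
                    return self.synth_ds[random.randrange(len(self.synth_ds))]
                return self.real_ds[idx]

    An existing pipeline would simply wrap its real dataset this way and keep its model, losses, and optimizer untouched, which is what "plug and play" refers to here.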

    Paper 12: OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction

    Authors: Yang Lixin, Li Kailin, Zhan Xinyu, Wu Fei, Xu Anran, Liu Liu, Lu Cewu

    Code address: https://oakink.net

    Work introduction: To learn how humans manipulate objects, machines must acquire knowledge from two perspectives: understanding the functionality (affordance) of objects, and learning human interaction/manipulation behaviors based on that functionality. Both kinds of knowledge are crucial, yet current datasets rarely cover them comprehensively. This paper proposes OakInk, a new large-scale knowledge base aimed at recognizing and understanding hand-object interaction behavior from visual information. First, 1,800 common household objects were collected and annotated with their affordances, forming the first knowledge base, Oak (Object Affordance Knowledge). Based on these affordances, the team recorded human interactions with 100 of the objects in Oak. The team also proposes a new mapping algorithm, Tink, which transfers the interactions recorded on these 100 objects to their virtual counterparts. The collected and transferred hand-object interaction samples constitute the second knowledge base, Ink (Interaction Knowledge). In total, OakInk contains 50,000 hand-object interaction instances. The team also presents two application scenarios for OakInk: intent-based interaction generation and human-to-human handover generation.


    Paper 13: Learning to Anticipate Future with Dynamic Context Removal

    Authors: Xu Xinyu, Li Yonglu, Lu Cewu

    Work introduction: This paper proposes a training method based on dynamic removal of context frames for the problem of action anticipation in videos. Drawing on the idea of curriculum learning, the team proposes a multi-stage training procedure: it starts by learning the dynamic logical structure of the video with all frames visible, moves on to anticipation learning with some additional context frames as hints, and finally reaches anticipation without any additional context frames. The training method is plug-and-play and can help improve the performance of many general-purpose anticipation networks. Experimentally, the team achieves state-of-the-art performance on four common datasets while keeping the model small (fewer than half the parameters of the baseline). The team's code and pre-trained models will be open-sourced in the near future.
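
    To make the curriculum concrete, here is a toy sketch of the idea (the linear schedule and zero-masking below are assumptions for illustration, not the paper's exact recipe):

        # Toy sketch of dynamic context removal: the number of visible context
        # frames shrinks as training progresses, ending at zero.
        import torch

        def visible_context(epoch, total_epochs, max_context):
            """How many context frames the model may still see at this epoch."""
            frac = 1.0 - epoch / max(total_epochs - 1, 1)   # 1 -> 0 over training
            return round(frac * max_context)

        def remove_context(frames, n_visible):
            """frames: (T, C) context-frame features; zero out the hidden ones."""
            masked = frames.clone()
            masked[n_visible:] = 0.0
            return masked

        frames = torch.randn(8, 256)   # 8 context frames of 256-d features
        for epoch in (0, 5, 9):
            k = visible_context(epoch, total_epochs=10, max_context=8)
            print(epoch, k)            # prints 0 8, then 5 4, then 9 0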

    Paper 14: Interactiveness Field of Human-Object Interactions

    Authors: Liu Xinpeng*, Li Yonglu*, Wu Xiaoqian, Dai Yurong, Lu Cewu, Deng Zhiqiang

    Work introduction: Human-object interaction detection plays a central role in behavior understanding. Although recent two-stage and one-stage approaches have achieved promising results, a key step in them, detecting the person-object pairs that actually interact, remains extremely challenging. This paper introduces a previously overlooked bimodal prior on interactiveness: for an object in an image, once it is paired with the detected human bodies, the resulting person-object pairs tend to be either predominantly non-interacting or predominantly interacting, and the former case is more common. Building on this prior, the paper proposes the concept of an interactiveness field. To make the learned field fit real human-object interaction scenes, the paper designs new energy constraints based on the cardinality of, and the differences between, interacting and non-interacting pairs in the interactiveness field. On several widely used human-object interaction datasets, the method achieves clear improvements over previous approaches.
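
    A minimal sketch of how the bimodal prior could be used at inference time (the scoring network and the thresholding rule are placeholder assumptions, not the paper's actual formulation):

        # For one object, pair it with every detected human, score interactiveness,
        # and keep only pairs that stand out from that object's own score distribution,
        # reflecting that most pairs are expected to be non-interacting.
        import torch

        def select_interacting_pairs(pair_scores, margin=0.5):
            """pair_scores: (H,) interactiveness of one object paired with H humans."""
            mu = pair_scores.mean()
            sigma = pair_scores.std(unbiased=False)
            return (pair_scores > mu + margin * sigma).nonzero(as_tuple=True)[0]

        scores = torch.tensor([0.05, 0.08, 0.91, 0.12, 0.07])  # 5 candidate humans
        print(select_interacting_pairs(scores))                # tensor([2])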





    School of Electronic Information and Electrical Engineering






