PTQ4SAM、Mamba-Attention、AniTalker、IceFormer、U-DiTs、CogDPM


This article was first published on the WeChat official account 机器感知 (Machine Perception).



PTQ4SAM: Post-Training Quantization for Segment Anything


Segment Anything Model (SAM) has achieved impressive performance in many computer vision tasks. However, as a large-scale model, the immense memory and computation costs hinder its practical deployment. In this paper, we propose a post-training quantization (PTQ) framework for Segment Anything Model, namely PTQ4SAM. First, we investigate the inherent bottleneck of SAM quantization attributed to the bimodal distribution in post-Key-Linear activations. We analyze its characteristics from both per-tensor and per-channel perspectives, and propose a Bimodal Integration strategy, which utilizes a mathematically equivalent sign operation to transform the bimodal distribution into a relatively easy-to-quantize normal distribution offline. Second, SAM encompasses diverse attention mechanisms (i.e., self-attention and two-way cross-attention), resulting in substantial variations in the post-Softmax distributions. Therefore, we introduce an Adaptive Granularity Quantization for Softmax th......
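
The Bimodal Integration idea can be pictured with a toy example: if each channel of the post-Key-Linear activation sits in either a negative or a positive mode, a per-channel sign vector s can fold all channels onto one mode without changing the attention product, since (Q·s)(K·s)^T = QK^T. Below is a minimal NumPy sketch under these assumptions; the synthetic data, the sign heuristic, and the 4-bit affine quantizer are illustrative stand-ins, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_channels = 1024, 8

# Toy post-Key-Linear activations: half the channels cluster near -3 and
# half near +3, so the tensor-level distribution is bimodal (synthetic
# data mimicking the bimodality the paper reports in SAM).
modes = np.where(np.arange(n_channels) % 2 == 0, 3.0, -3.0)
K = modes + 0.3 * rng.standard_normal((n_tokens, n_channels))
Q = rng.standard_normal((n_tokens, n_channels))

# Per-channel sign vector s in {-1, +1}: flipping both Q and K by s is
# mathematically equivalent because (Q * s) @ (K * s).T == Q @ K.T.
s = np.sign(K.mean(axis=0))
K_folded, Q_folded = K * s, Q * s
assert np.allclose(Q @ K.T, Q_folded @ K_folded.T)

def affine_quantize(x, bits=4):
    # plain per-tensor affine uniform quantizer (a stand-in, not PTQ4SAM's)
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2 ** bits - 1)
    return np.round((x - lo) / scale) * scale + lo

print("error before folding:", np.abs(affine_quantize(K) - K).mean())
print("error after folding: ", np.abs(affine_quantize(K_folded) - K_folded).mean())
```

In PTQ4SAM itself the sign operation is absorbed offline rather than applied at runtime; the point of the toy is only that folding the two modes together tightens the quantization range and shrinks the error.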

AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding


The paper introduces AniTalker, an innovative framework designed to generate lifelike talking faces from a single portrait. Unlike existing models that primarily focus on verbal cues such as lip synchronization and fail to capture the complex dynamics of facial expressions and nonverbal cues, AniTalker employs a universal motion representation. This representation effectively captures a wide range of facial dynamics, including subtle expressions and head movements. AniTalker enhances motion depiction through two self-supervised learning strategies: the first involves reconstructing target video frames from source frames within the same identity to learn subtle motion representations, and the second develops an identity encoder using metric learning while actively minimizing mutual information between the identity and motion encoders. This approach ensures that the motion representation is dynamic and devoid of identity-specific details, significantly reducing the n......
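
A toy sketch of the two self-supervised signals described above might look as follows: a same-identity reconstruction loss, a triplet-style metric loss for the identity encoder, and a decorrelation penalty between the motion and identity codes. All networks are tiny placeholders, and the correlation penalty is a crude stand-in for the paper's mutual-information minimization, not its actual estimator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny placeholder encoders and decoder (stand-ins for AniTalker's networks).
motion_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))
id_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64))
decoder = nn.Linear(128, 3 * 64 * 64)

src = torch.randn(8, 3, 64, 64)    # source frames of one identity
tgt = torch.randn(8, 3, 64, 64)    # target frames of the same identity
other = torch.randn(8, 3, 64, 64)  # frames of a different identity

# Strategy 1: reconstruct the target frame from (motion of target, identity
# of source), so the motion code must carry the frame dynamics.
m, i = motion_enc(tgt), id_enc(src)
recon = decoder(torch.cat([m, i], dim=1)).view_as(tgt)
loss_recon = F.mse_loss(recon, tgt)

# Strategy 2: metric learning on the identity encoder: same identity close,
# different identity far (a simple triplet-style margin loss).
d_pos = (i - id_enc(tgt)).pow(2).sum(1)
d_neg = (i - id_enc(other)).pow(2).sum(1)
loss_metric = F.relu(d_pos - d_neg + 1.0).mean()

# Cross-correlation penalty between motion and identity codes, a crude
# stand-in for the paper's mutual-information minimization.
mc = (m - m.mean(0)) / (m.std(0) + 1e-6)
ic = (i - i.mean(0)) / (i.std(0) + 1e-6)
loss_mi = ((mc.T @ ic) / m.shape[0]).pow(2).mean()

(loss_recon + loss_metric + 0.1 * loss_mi).backward()
```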

Matten: Video Generation with Mamba-Attention


In this paper, we introduce Matten, a cutting-edge latent diffusion model with a Mamba-Attention architecture for video generation. With minimal computational cost, Matten employs spatial-temporal attention for local video content modeling and bidirectional Mamba for global video content modeling. Our comprehensive experimental evaluation demonstrates that Matten is competitive with current Transformer-based and GAN-based models on benchmarks, achieving superior FVD scores and efficiency. Additionally, we observe a direct positive correlation between the complexity of our designed model and the improvement in video quality, indicating the excellent scalability of Matten. ......
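
In its simplest form, "bidirectional Mamba for global video content modeling" means running a linear state-space scan over the flattened token sequence in both directions and merging the two passes. The sketch below uses a scalar-state recurrence as a stand-in for a full selective-scan Mamba block; in Matten this global pass is paired with spatial-temporal attention for local content.

```python
import torch

def ssm_scan(u, a=0.9, b=1.0, c=1.0):
    # Minimal linear recurrence h_t = a*h_{t-1} + b*u_t, y_t = c*h_t, applied
    # per channel: a toy stand-in for a real selective-scan Mamba block.
    h = torch.zeros_like(u[0])
    ys = []
    for u_t in u:
        h = a * h + b * u_t
        ys.append(c * h)
    return torch.stack(ys)

def bidirectional_scan(u):
    # Global mixing over the flattened video tokens in both directions,
    # with the forward and backward passes summed.
    return ssm_scan(u) + ssm_scan(u.flip(0)).flip(0)

tokens = torch.randn(16, 64)             # (sequence length, channels)
print(bidirectional_scan(tokens).shape)  # torch.Size([16, 64])
```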

SMCD: High Realism Motion Style Transfer via Mamba-based Diffusion


Motion style transfer is a significant research direction in multimedia applications. It enables the rapid switching of different styles of the same motion for virtual digital humans, thus vastly increasing the diversity and realism of movements. It is widely applied in multimedia scenarios such as movies, games, and the Metaverse. However, most of the current work in this field adopts GANs, which may lead to instability and convergence issues, making the final generated motion sequence somewhat chaotic and unable to reflect a highly realistic and natural style. To address these problems, we consider style motion as a condition and propose the Style Motion Conditioned Diffusion (SMCD) framework for the first time, which can more comprehensively learn the style features of motion. Moreover, we apply the Mamba model for the first time in the motion style transfer field, introducing the Motion Style Mamba (MSM) module to handle longer motion sequences. Thirdly, aiming at the SMCD......
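
Treating style motion as a condition, as described above, amounts to feeding a style code into the denoiser at every diffusion step. Below is a generic conditional-diffusion training step with a toy MLP denoiser and a toy cosine schedule; SMCD's actual Motion Style Mamba module and training details are not reproduced here.

```python
import torch
import torch.nn as nn

class StyleCondDenoiser(nn.Module):
    # Toy denoiser taking (noisy motion latent, timestep, style code);
    # a placeholder for SMCD's actual architecture.
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim + 1, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x_t, t, style):
        t_emb = t.float().unsqueeze(-1) / 1000.0
        return self.net(torch.cat([x_t, style, t_emb], dim=-1))

model = StyleCondDenoiser()
x0 = torch.randn(8, 64)        # clean motion features (toy)
style = torch.randn(8, 64)     # style-motion condition
t = torch.randint(0, 1000, (8,))
noise = torch.randn_like(x0)

# Standard epsilon-prediction objective under a toy cosine schedule.
alpha_bar = torch.cos(t.float() / 1000 * torch.pi / 2) ** 2
x_t = (alpha_bar.sqrt().unsqueeze(-1) * x0
       + (1 - alpha_bar).sqrt().unsqueeze(-1) * noise)
loss = ((model(x_t, t, style) - noise) ** 2).mean()
loss.backward()
```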

IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs


One limitation of existing Transformer-based models is that they cannot handle very long sequences as input, since their self-attention operations exhibit quadratic time and space complexity. This problem becomes especially acute when Transformers are deployed on hardware platforms equipped only with CPUs. To address this issue, we propose a novel method for accelerating self-attention at inference time that works with pretrained Transformer models out-of-the-box without requiring retraining. We experiment using our method to accelerate various long-sequence Transformers, including a leading LLaMA 2-based LLM, on various benchmarks and demonstrate speedups of 2.73x - 7.63x while retaining 98.6% - 99.6% of the accuracy of the original pretrained models. The code is available on our project website at https://yuzhenmao.github.io/IceFormer/. ......
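
The abstract does not spell out the mechanism, but a common family of retraining-free attention accelerations restricts each query to its k highest-scoring keys. The sketch below shows that generic top-k approximation; it is an assumption for illustration, not IceFormer's actual CPU algorithm.

```python
import numpy as np

def topk_attention(Q, K, V, k=8):
    # Exact softmax attention restricted, per query, to the k keys with the
    # highest scores: a generic retraining-free approximation.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]   # top-k keys per query
    top = np.take_along_axis(scores, idx, axis=-1)
    w = np.exp(top - top.max(axis=-1, keepdims=True))    # softmax over top-k
    w /= w.sum(axis=-1, keepdims=True)
    return np.einsum('qk,qkd->qd', w, V[idx])

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 32)) for _ in range(3))
print(topk_attention(Q, K, V).shape)  # (128, 32)
```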

Efficient Text-driven Motion Generation via Latent Consistency Training


Motion diffusion models have recently proven successful for text-driven human motion generation. Despite their excellent generation performance, they are challenging to run in real time because of a multi-step sampling mechanism that involves tens or hundreds of repeated function evaluations. To this end, we investigate motion latent consistency training (MLCT) for motion generation to alleviate the computation and time cost of iterative inference. It applies diffusion pipelines to low-dimensional motion latent spaces to mitigate the computational burden of each function evaluation. Framing the diffusion process with probability flow ordinary differential equation (PF-ODE) theory, MLCT allows inference in extremely few steps from the prior distribution to the motion latent representation distribution by maintaining consistency of the outputs over the trajectory of the PF-ODE. In particular, we introduce a quantization constraint to optimize motion latent r......
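
The consistency objective, namely making the model's outputs agree at neighbouring points on the same PF-ODE trajectory, can be sketched in a few lines. This is a generic latent consistency-training step (variance-exploding perturbation with shared noise, and a skip connection so that f(z, 0) = z); MLCT's network, schedule, and quantization constraint are not reproduced.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(33, 128), nn.ReLU(), nn.Linear(128, 32))

def f(z_t, t):
    # consistency function with boundary condition f(z, 0) = z
    t_col = t.unsqueeze(-1)
    return z_t + t_col * net(torch.cat([z_t, t_col], dim=-1))

z0 = torch.randn(16, 32)            # motion latents (toy)
noise = torch.randn_like(z0)
t1 = torch.rand(16) * 0.9 + 0.05    # two nearby times on one trajectory
t2 = t1 + 0.05

# Sharing the noise direction places both points on the same (VE) PF-ODE path.
z_t1 = z0 + t1.unsqueeze(-1) * noise
z_t2 = z0 + t2.unsqueeze(-1) * noise

with torch.no_grad():               # teacher target (EMA weights in practice)
    target = f(z_t1, t1)
loss = ((f(z_t2, t2) - target) ** 2).mean()
loss.backward()
```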

U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers


Diffusion Transformers (DiTs) introduce the transformer architecture to diffusion tasks for latent-space image generation. With an isotropic architecture that chains a series of transformer blocks, DiTs demonstrate competitive performance and good scalability; but meanwhile, the abandonment of U-Net by DiTs and their subsequent improvements is worth rethinking. To this end, we conduct a simple toy experiment comparing a U-Net-architectured DiT with an isotropic one. It turns out that the U-Net architecture gains only a slight advantage from the U-Net inductive bias, indicating potential redundancies within the U-Net-style DiT. Inspired by the discovery that U-Net backbone features are low-frequency-dominated, we perform token downsampling on the query-key-value tuple for self-attention and obtain further improvements despite a considerable reduction in computation. Based on self-attention with downsampled tokens, we propose a series of U-shaped DiTs (U-DiTs) in the ......
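
One simple reading of "token downsampling on the query-key-value tuple" is to pool the spatial tokens, attend over the pooled set, and upsample the result. The sketch below implements that simplified reading with 2x2 average pooling; the actual U-DiT downsampling keeps all sub-tokens rather than pooling information away, so treat this as an approximation.

```python
import torch
import torch.nn.functional as F

def downsampled_attention(x, h, w):
    # Self-attention over 2x2 average-pooled tokens, upsampled back to the
    # full token grid (a simplified stand-in for U-DiTs' scheme).
    B, N, C = x.shape
    grid = x.transpose(1, 2).reshape(B, C, h, w)
    down = F.avg_pool2d(grid, 2)                         # (B, C, h/2, w/2)
    tok = down.flatten(2).transpose(1, 2)                # (B, N/4, C)
    out = F.scaled_dot_product_attention(tok, tok, tok)  # Q = K = V = tok
    out = out.transpose(1, 2).reshape(B, C, h // 2, w // 2)
    out = F.interpolate(out, scale_factor=2, mode="nearest")
    return out.flatten(2).transpose(1, 2)                # back to (B, N, C)

x = torch.randn(2, 16 * 16, 64)
print(downsampled_attention(x, 16, 16).shape)  # torch.Size([2, 256, 64])
```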

From Generalization Analysis to Optimization Designs for State Space Models


A State Space Model (SSM) is a foundation model in time series analysis which has recently been shown to be an alternative to transformers in sequence modeling. In this paper, we theoretically study the generalization of SSMs and propose improvements to training algorithms based on the generalization results. Specifically, we give a data-dependent generalization bound for SSMs, showing an interplay between the SSM parameters and the temporal dependencies of the training sequences. Leveraging the generalization bound, we (1) set up a scaling rule for model initialization based on the proposed generalization measure, which significantly improves the robustness of SSM output value scales to different temporal patterns in the sequence data; (2) introduce a new regularization method for training SSMs to enhance the generalization performance. Numerical experiments are conducted to validate our results. ......
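
The motivation for an initialization scaling rule is easy to see numerically: an SSM's output scale depends strongly on the temporal pattern of its input. The toy below compares a scalar SSM's output standard deviation on fast-varying versus slowly varying inputs; it illustrates the phenomenon only and does not reproduce the paper's bound or rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def output_scale(u, a=0.99, b=1.0, c=1.0):
    # scalar SSM h_t = a*h_{t-1} + b*u_t, y_t = c*h_t; return std of y
    h, ys = 0.0, []
    for u_t in u:
        h = a * h + b * u_t
        ys.append(c * h)
    return float(np.std(ys))

T = 5000
fast = rng.standard_normal(T)                         # white-noise input
slow = np.repeat(rng.standard_normal(T // 100), 100)  # piecewise-constant input
print("fast input:", output_scale(fast))  # roughly sqrt(1 / (1 - a^2)), ~7
print("slow input:", output_scale(slow))  # much larger for the same weights
```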

CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding


Predictive Coding (PC) is a theoretical framework in cognitive science suggesting that the human brain processes cognition through spatiotemporal prediction of the visual world. Existing studies have developed spatiotemporal prediction neural networks based on PC theory, emulating its two core mechanisms: correcting predictions from residuals and hierarchical learning. However, these models do not show improved prediction skill on real-world forecasting tasks, and they ignore the precision weighting mechanism of PC theory. The precision weighting mechanism posits that the brain allocates more attention to signals with lower precision, contributing to the cognitive ability of the human brain. This work introduces the Cognitive Diffusion Probabilistic Models (CogDPM), which demonstrate the connection between diffusion probabilistic models and PC theory. CogDPM features a precision estimation method based on the hierarchical sampling capabilities of diffusion models and we......
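
A crude way to picture precision weighting is to estimate per-pixel precision from the spread of an ensemble of diffusion samples and reweight the prediction error accordingly; following the abstract, lower-precision signals receive more weight here. This stand-in does not reproduce CogDPM's hierarchical-sampling-based estimator.

```python
import torch

preds = torch.randn(32, 10, 8, 8)  # an ensemble of 32 sampled forecasts
obs = torch.randn(10, 8, 8)        # the observed field (toy data)

var = preds.var(dim=0)             # per-pixel spread across the ensemble
# precision = inverse variance; per the abstract, lower-precision signals
# get more attention, so the weight here is proportional to the variance
weight = var / var.mean()
weighted_error = (weight * (preds.mean(dim=0) - obs) ** 2).mean()
print(weighted_error)
```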

