3D Shape Part Segmentation by Vision-Language Model Distillation



Abstract

This paper proposes a cross-modal distillation framework, PartDistill, which transfers 2D knowledge from vision-language models (VLMs) to facilitate 3D shape part segmentation. PartDistill addresses three major challenges in this task: the lack of 3D segmentation in invisible or undetected regions in the 2D projections, inconsistent 2D predictions by VLMs, and the lack of knowledge accumulation across different 3D shapes. PartDistill consists of a teacher network that uses a VLM to make 2D predictions and a student network that learns from the 2D predictions while extracting geometrical features from multiple 3D shapes to carry out 3D part segmentation. A bi-directional distillation, including forward and backward distillations, is carried out within the framework, where the former forward distills the 2D predictions to the student network, and the latter improves the quality of the 2D predictions, which subsequently enhances the final 3D segmentation. Moreover, PartDistill can exploit generative models that facilitate effortless 3D shape creation for generating knowledge sources to be distilled. In extensive experiments, PartDistill outperforms existing methods by substantial margins on the widely used ShapeNetPart and PartNetE datasets, with more than 15% and 12% higher mIoU scores, respectively. The code for this work is available at https://github.com/ardianumam/PartDistill.

1 Introduction

3D shape part segmentation is essential to various 3D vision applications, such as shape editing [23, 45], stylization [29], and augmentation [40]. Despite its significance, acquiring part annotations for 3D data, such as point clouds or mesh shapes, is labor-intensive and time-consuming.

Zero-shot learning [41, 8] generalizes a model to unseen categories without annotations and has been substantially advanced by recent progress in vision-language models (VLMs) [37, 22, 46, 21]. By learning on large-scale image-text data pairs, VLMs show promising generalization abilities on various 2D recognition tasks. Recent research efforts [48, 52, 24, 1] have been made to utilize VLMs for zero-shot 3D part segmentation, where a 3D shape is projected into multi-view 2D images, and a VLM is applied to these images for 2D prediction acquisition. Specifically, PointCLIP [48] and PointCLIPv2 [52] produce 3D point-wise semantic segmentation by averaging their corresponding 2D pixel-wise predictions. Meanwhile, PartSLIP [24] and SATR [1] present a designated weighting mechanism to aggregate multi-view bounding box predictions.


Figure 1: We present a distillation method that carries out zero-shot 3D shape part segmentation with a 2D vision-language model. After projecting an input 3D point cloud into multi-view 2D images, the 2D teacher (2D-T) and the 3D student (3D-S) networks are applied to the 2D images and the 3D point cloud, respectively. Instead of direct transfer, our method carries out bi-directional distillation, including forward and backward distillation, and yields better 3D part segmentation than existing methods.

The key step of zero-shot 3D part segmentation with 2D VLMs, e.g., [48, 52, 24, 1], lies in the transfer from 2D pixel-wise or bounding-box-wise predictions to 3D point segmentation. This step is challenging due to three major issues. First (𝓘𝟏), some 3D regions lack corresponding 2D predictions in the multi-view images, caused by occlusion or by not being covered by any bounding box, illustrated with black and gray points, respectively, in Fig. 1. This issue is acknowledged as a limitation in previous work [48, 52, 24, 1]. Second (𝓘𝟐), there exists potential inconsistency among the 2D predictions across multi-view images caused by inaccurate VLM predictions. Third (𝓘𝟑), existing work [48, 52, 24, 1] directly transfers 2D predictions to the segmentation of a single 3D shape. The 2D predictions, derived from appearance features, are not optimal for 3D geometric shape segmentation, and the geometric evidence shared across different 3D shapes is not explored.

To alleviate the three issues 𝓘𝟏∼𝓘𝟑, unlike existing methods [48, 52, 24, 1] that directly transfer 2D predictions to 3D segmentation, we propose a cross-modal distillation framework with a teacher-student model. Specifically, a VLM is utilized as a 2D teacher network, accepting multi-view images of a single 3D shape. The VLM is pre-trained on large-scale image-text pairs and can exploit appearance features to make 2D predictions. The student network is built on a point cloud backbone. It learns from multiple unlabeled 3D shapes and can extract point-specific geometric features. The proposed distillation method, PartDistill, leverages the strengths of both networks, hence improving zero-shot 3D part segmentation.

The student network learns from not only the 2D teacher network but also 3D shapes. It can extract point-wise features and segment 3D regions uncovered by 2D predictions, hence tackling issue 𝓘𝟏. As a distillation-based method, PartDistill tolerates inconsistent predictions between the teacher and student networks, which alleviates issue 𝓘𝟐 of negative transfer caused by wrong VLM predictions. The student network considers both appearance and geometric features. Thus, it can better predict 3D geometric data and mitigate issue 𝓘𝟑. As shown in Fig. 1, the student network can correctly predict the undetected arm of the chair (see the black arrows) by learning from other chairs.

PartDistill carries out a bi-directional distillation. It first forward distills the 2D knowledge to the student network. We observe that after the student integrates the 2D knowledge, we can jointly refer to both the teacher and student knowledge to perform backward distillation, which re-scores the 2D knowledge based on its quality. Those of low quality will be suppressed with lower scores, such as from 0.6 to 0.1 for the falsely detected arm box in Fig. 1, and vice versa. Finally, this re-scored knowledge is utilized by the student network to seek better 3D segmentation.

The main contributions of this work are summarized as follows. First, we introduce PartDistill, a cross-modal distillation framework that transfers 2D knowledge from VLMs to facilitate 3D part segmentation. PartDistill addresses three identified issues present in existing methods and generalizes to both VLM with bounding-box predictions (B-VLM) and pixel-wise predictions (P-VLM). Second, we propose a bi-directional distillation, which involves enhancing the quality of 2D knowledge and subsequently improving the 3D predictions. Third, PartDistill can leverage existing generative models [31, 33] to enrich knowledge sources for distillation. Extensive experiments demonstrate that PartDistill surpasses existing methods by substantial margins on widely used benchmark datasets, ShapeNetPart [44] and PartNetE [24], with more than 15% and 12% higher mIoU scores, respectively. PartDistill consistently outperforms competing methods in zero-shot and few-shot scenarios on 3D data in point clouds or mesh shapes.


Figure 2: Overview of the proposed method. (a) The overall pipeline, where the knowledge extracted from a vision-language model (VLM) is distilled to carry out 3D shape part segmentation by teaching a 3D student network. Within the pipeline, backward distillation is introduced to re-score the teacher’s knowledge based on its quality and subsequently improve the final 3D part prediction. (b) (c) Knowledge is extracted by back-projection when we adopt (b) a bounding-box VLM (B-VLM) or (c) a pixel-wise VLM (P-VLM), where Γ and ℂ denote 2D-to-3D back-projection and connected-component labeling [3], respectively.

2 Related Work

Vision-language models.

Based on learning granularity, vision-language models (VLMs) can be grouped into three categories, including the image-level [37, 15], pixel-level [21, 27, 53], and object-level [22, 46, 25] categories. The second and the third categories make pixel-level and bounding box predictions, respectively, while the first category produces image-level predictions. Recent research efforts on VLMs have been made for cross-level predictions. For example, pixel-level predictions can be derived from an image-level VLM via up-sampling the 2D features into the image dimensions, as shown in PointCLIPv2 [52]. In this work, we propose a cross-modal distillation framework that learns and transfers knowledge from a VLM in the 2D domain to 3D shape part segmentation.

3D part segmentation using vision-language models.

State-of-the-art zero-shot 3D part segmentation [24, 52, 1] is developed by utilizing a VLM and transferring its knowledge in the 2D domain to the 3D space. The pioneering work PointCLIP [48] utilizes CLIP [37]. PointCLIPv2 [52] extends PointCLIP by making the projected multi-view images more realistic and proposing LLM-assisted text prompts [4], hence producing more reliable CLIP outputs for 3D part segmentation.

Both PointCLIP and PointCLIPv2 rely on individual pixel predictions in 2D views to obtain the predictions of the corresponding 3D points, but individual pixel predictions are less reliable. PartSLIP [24] instead proposes to extract superpoints [20] from the input point cloud, so that 3D segmentation is estimated for each superpoint by referring to a set of relevant pixels in 2D views. PartSLIP uses GLIP [22] to output bounding boxes and further proposes a weighting mechanism to aggregate multi-view bounding box predictions into 3D superpoint predictions. SATR [1] shares a similar idea with PartSLIP but handles 3D mesh shapes instead of point clouds.

Existing methods [24, 52, 48, 1] directly transfer VLM predictions from 2D images into the 3D space and suffer from three issues: (𝓘𝟏) uncovered 3D points, (𝓘𝟐) negative transfer, and (𝓘𝟑) cross-modality predictions, as discussed above. We present a distillation-based method that addresses all three issues and makes substantial performance improvements.

2D to 3D distillation.

Seminal work of knowledge distillation [5, 14] aims at transferring knowledge from a large model to a small one. Subsequent research efforts [39, 28, 50, 26, 43] adopt this idea of transferring knowledge from a 2D model for 3D understanding. However, these methods require further fine-tuning with labeled data. OpenScene [34] and CLIP2Scene [7] require no fine-tuning and share a similar concept with our method of distilling VLMs for 3D understanding, with ours designed for part segmentation and theirs for indoor/outdoor scene segmentation. The major difference is that our method can enhance the knowledge sources in the 2D modality via the proposed backward distillation. Moreover, our method is generalizable to both P-VLM (pixel-wise VLM) and B-VLM (bounding-box VLM), while their methods are only applicable to P-VLM.

3 Proposed Method

3.1 Overview

Given a set of 3D shapes, this work aims to segment each one into R semantic parts without training with any part annotations. To this end, we propose a cross-modal bi-directional distillation framework, PartDistill, which transfers 2D knowledge from a VLM to facilitate 3D shape part segmentation. As illustrated in Fig. 2, our framework takes triplet data as input, including the point cloud of the shape with N 3D points, multi-view images rendered from the shape in V different poses, and R text prompts, each describing one of the target semantic parts of the 3D shapes.

For the 2D modality, the $V$ multi-view images and the text prompts are fed into a bounding-box VLM (B-VLM) or a pixel-wise VLM (P-VLM). For each view $v$, a B-VLM produces a set of bounding boxes $B_v=\{b_i\}_{i=1}^{\beta}$, while a P-VLM generates pixel-wise predictions $S_v$. We then perform knowledge extraction (Sec. 3.2) on each $B_v$ or $S_v$; namely, we transfer the 2D predictions into the 3D space through back-projection for a B-VLM, or connected-component labeling [3] followed by back-projection for a P-VLM, as shown in Fig. 2 (b) and Fig. 2 (c), respectively. Subsequently, a set of $D$ teacher knowledge units, $\mathcal{K}=\{k^d\}_{d=1}^{D}=\{Y^d, M^d\}_{d=1}^{D}$, is obtained by aggregating over all $V$ multi-view images. Each unit $k^d$ comprises point-wise part probabilities $Y^d\in\mathbb{R}^{N\times R}$ from the teacher VLM network, accompanied by a mask $M^d\in\{0,1\}^{N}$ identifying the points included in this knowledge unit.
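
As a concrete illustration of these data structures, the following sketch (our own, not taken from the released code) models a single teacher knowledge unit $k^d=(Y^d, M^d)$ together with its confidence score; all names are hypothetical.

```python
# Hedged sketch of a teacher knowledge unit k_d = (Y_d, M_d) plus its confidence score.
from dataclasses import dataclass
import torch

@dataclass
class KnowledgeUnit:
    probs: torch.Tensor  # Y_d, shape (N, R): part probabilities back-projected from one 2D prediction
    mask: torch.Tensor   # M_d, shape (N,), bool: which of the N points this unit covers
    conf: float          # C_d: confidence score, later re-scored by backward distillation

# Example with N = 2048 points and R = 4 parts
N, R = 2048, 4
unit = KnowledgeUnit(
    probs=torch.softmax(torch.randn(N, R), dim=-1),
    mask=torch.rand(N) > 0.5,
    conf=0.6,
)
```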

For the 3D modality, the point cloud is passed to the 3D student network, composed of a 3D encoder and a distillation head, producing point-wise part predictions $\tilde{Y}\in\mathbb{R}^{N\times R}$. With the proposed bi-directional distillation framework, we first forward distill the teacher's 2D knowledge by aligning $\tilde{Y}$ with $\mathcal{K}$ via minimizing the proposed loss $\mathcal{L}_{distill}$, specified in Sec. 3.2. Through this optimization, the 3D student network integrates the teacher's 2D knowledge. The integrated student knowledge $\tilde{Y}'$ and the teacher knowledge $\mathcal{K}$ are then jointly referred to in the backward distillation from 3D to 2D, detailed in Sec. 3.3, which re-scores each knowledge unit $k^d$ based on its quality, as shown in Fig. 2. Finally, the re-scored knowledge $\mathcal{K}'$ is used to refine the student knowledge into the final part segmentation predictions $\tilde{Y}^f$, obtained by assigning each point to the part with the highest probability.

3.2 Forward distillation: 2D to 3D

Our method extracts the teacher’s knowledge in the 2D modality and distills it into the 3D space. In the 2D modality, $V$ multi-view images $\{I_v\in\mathbb{R}^{H\times W}\}_{v=1}^{V}$ are rendered from the 3D shape, e.g., using the projection method in [52]. These $V$ multi-view images, together with the text prompts $T$ of the $R$ parts, are passed to the VLM to obtain the knowledge in the 2D space. For a B-VLM, a set of $\beta$ bounding boxes, $B_v=\{b_i\}_{i=1}^{\beta}$, is obtained from the $v$-th image, with $b_i\in\mathbb{R}^{4+R}$ encoding the box coordinates and the probabilities of the $R$ parts. For a P-VLM, a pixel-wise prediction map $S_v\in\mathbb{R}^{H\times W\times R}$ is acquired from the $v$-th image. We apply knowledge extraction to each $B_v$ and each $S_v$ to obtain readily distillable knowledge $\mathcal{K}$ in the 3D space, as illustrated in Fig. 2 (b) and Fig. 2 (c), respectively.

For a B-VLM, the bounding boxes can directly be treated as the teacher knowledge. For a P-VLM, knowledge extraction starts by applying connected-component labeling [3] to $S_v$ to obtain a set of $\rho$ segmentation components, $\{s_i\in\mathbb{R}^{H\times W\times R}\}_{i=1}^{\rho}$, where each component marks the pixels for which the $r$-th part receives the highest probability. We summarize the process of applying a VLM to a rendered image and the part text prompts as

$$\mathrm{VLM}(I_v, T) = \begin{cases} B_v = \{b_i\}_{i=1}^{\beta}, & \text{for B-VLM}, \\ \mathbb{C}(S_v) = \{s_i\}_{i=1}^{\rho}, & \text{for P-VLM}, \end{cases} \qquad (1)$$

where $\mathbb{C}$ denotes connected-component labeling.
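
To make the P-VLM branch of Eq. 1 concrete, the snippet below sketches connected-component labeling of a pixel-wise prediction map $S_v$ with scipy.ndimage.label; the minimum-size filter is our own heuristic, not part of the paper.

```python
# Hedged sketch of the P-VLM branch of Eq. 1: split a pixel-wise prediction map S_v into
# connected components {s_i}. The minimum-size filter is our own heuristic.
import numpy as np
from scipy import ndimage

def extract_components(S_v, min_pixels=20):
    """S_v: (H, W, R) per-pixel part probabilities. Returns a list of (part_id, binary_mask) pairs."""
    part_map = S_v.argmax(axis=-1)                          # (H, W) winning part per pixel
    components = []
    for r in range(S_v.shape[-1]):
        labeled, num = ndimage.label(part_map == r)         # connected-component labeling (C in Eq. 1)
        for comp_id in range(1, num + 1):
            mask = labeled == comp_id
            if mask.sum() >= min_pixels:                    # drop tiny speckles
                components.append((r, mask))
    return components

# Usage on a random map with H = W = 224 and R = 4 parts
components = extract_components(np.random.rand(224, 224, 4))
```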

We then back-project each box $b_i$ or each segmentation component $s_i$ to the 3D space, i.e.,

$$k_i = (Y_i, M_i) = \begin{cases} \Gamma(b_i), & \text{for B-VLM}, \\ \Gamma(s_i), & \text{for P-VLM}, \end{cases} \qquad (2)$$

where $\Gamma$ denotes the back-projection operation using the camera parameters [49] employed for multi-view image rendering, $Y_i\in\mathbb{R}^{N\times R}$ is the point-wise part probabilities, and $M_i\in\{0,1\}^{N}$ is a mask indicating which 3D points are covered by $b_i$ or $s_i$ in the 2D space. The pair $(Y_i, M_i)$ forms a knowledge unit $k_i$, on which knowledge re-scoring is performed during backward distillation.
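
The back-projection $\Gamma$ can be sketched with a standard pinhole camera model, as below; the camera conventions are assumptions on our side, and occlusion handling (e.g., a depth test against the renderer's z-buffer) is omitted for brevity.

```python
# Hedged sketch of the back-projection Gamma in Eq. 2 with a pinhole camera model; the camera
# conventions are assumptions, and a depth test against the renderer's z-buffer is omitted.
import torch

def back_project(points, K, Rt, region_mask, part_probs):
    """points: (N, 3) float; K: (3, 3) intrinsics; Rt: (3, 4) extrinsics [R|t];
    region_mask: (H, W) bool mask of one 2D prediction; part_probs: (R,) its part probabilities.
    Returns (Y_i, M_i) as in Eq. 2."""
    N, (H, W) = points.shape[0], region_mask.shape
    pts_h = torch.cat([points, torch.ones(N, 1)], dim=1)    # homogeneous coordinates (N, 4)
    cam = (Rt @ pts_h.T).T                                  # camera frame (N, 3)
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)             # pixel coordinates (N, 2)
    u, v = uv[:, 0].round().long(), uv[:, 1].round().long()
    visible = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam[:, 2] > 0)
    M_i = torch.zeros(N, dtype=torch.bool)
    M_i[visible] = region_mask[v[visible], u[visible]]      # covered by this 2D prediction
    Y_i = torch.zeros(N, part_probs.shape[0])
    Y_i[M_i] = part_probs                                   # broadcast the region's part probabilities
    return Y_i, M_i
```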

For the 3D modality, a 3D encoder, e.g., Point-M2AE [47], is applied to the point cloud to obtain per-point features $O\in\mathbb{R}^{N\times E}$ that capture local and global geometric information. We then estimate the point-wise part predictions $\tilde{Y}\in\mathbb{R}^{N\times R}$ by feeding the point features $O$ into the distillation head. The cross-modal distillation is performed by teaching the student network to align the part probabilities of the 3D modality, $\tilde{Y}$, to their 2D counterparts $Y$ via minimizing our designated distillation loss.

Distillation loss.

Via Eq. 1 and Eq. 2, assume that $D$ knowledge units, $\mathcal{K}=\{k^d\}_{d=1}^{D}=\{Y^d, M^d\}_{d=1}^{D}$, are obtained from the multi-view images. The knowledge $\mathcal{K}$ exploits 2D appearance features and is incomplete, since several 3D points are not covered by any 2D prediction, i.e., issue 𝓘𝟏. To distill this incomplete knowledge, we utilize a masked cross-entropy loss defined as

$$\mathcal{L}_{distill} = -\sum_{d=1}^{D} \frac{1}{|M^d|} \sum_{n=1}^{N} \sum_{r=1}^{R} M_n^d \, C_n^d \, Z_{n,r}^d \, \log(\tilde{Y}_{n,r}), \qquad (3)$$

where $C_n^d=\max_r(Y_n^d(r))$ is the confidence score of $k^d$ on point $n$, $Z_{n,r}^d$ takes value 1 if part $r$ receives the highest probability in $k^d$ and 0 otherwise, and $|M^d|$ is the number of points covered by the mask $M^d$.

By minimizing Eq. 3, we teach the student network to align its prediction $\tilde{Y}$ to the distilled prediction $Y$ by considering the points covered by the masks and using the confidence scores as weights. Despite learning from incomplete knowledge, the student network extracts point features that capture the geometric information of the shape, enabling it to reasonably segment the points that are not covered by any 2D prediction and hence addressing issue 𝓘𝟏. This can be regarded as the distillation head interpolating the learned part probabilities in the feature space.
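
A minimal PyTorch rendering of Eq. 3 could look as follows; it reuses the $(Y^d, M^d, C^d)$ unit layout sketched earlier and is meant as an illustration rather than the authors' implementation.

```python
# Hedged sketch of the masked, confidence-weighted cross-entropy in Eq. 3; not the authors' code.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, units):
    """student_logits: (N, R) raw outputs of the distillation head.
    units: list of (Y_d, M_d, C_d), with Y_d (N, R) teacher part probabilities,
    M_d (N,) bool coverage mask, and C_d a per-point (N,) or scalar confidence (None = use max_r Y_d)."""
    log_p = torch.log_softmax(student_logits, dim=-1)                # log(Y~_{n,r})
    loss = student_logits.new_zeros(())
    for Y_d, M_d, C_d in units:
        if C_d is None:
            C_d = Y_d.max(dim=-1).values                             # C^d_n = max_r Y^d_n(r)
        Z_d = F.one_hot(Y_d.argmax(dim=-1), Y_d.shape[-1]).float()   # Z^d_{n,r}: teacher one-hot target
        per_point = -(Z_d * log_p).sum(dim=-1) * C_d                 # confidence-weighted CE per point
        loss = loss + (per_point * M_d.float()).sum() / M_d.float().sum().clamp(min=1.0)  # 1/|M^d|
    return loss
```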

As a distillation-based method, our method allows partial inconsistency among the extracted knowledge $\mathcal{K}=\{k^d\}_{d=1}^{D}$ caused by inaccurate VLM predictions, thereby alleviating issue 𝓘𝟐 of negative transfer. In our method, the teacher network works on 2D appearance features, while the student network extracts 3D geometric features. After distillation via Eq. 3, the student network can exploit both appearance and geometric features from multiple shapes, hence mitigating issue 𝓘𝟑 of cross-modal transfer. It is worth noting that, unlike conventional teacher-student models [14, 11, 13], which solely establish a one-to-one correspondence, we further re-score each knowledge unit $k^d$ based on its quality (Sec. 3.3) and improve the distillation by suppressing low-quality knowledge units.

3.3 Backward distillation: 3D to 2D

In Eq. 3, we consider all knowledge units $\{k^d\}_{d=1}^{D}$, weighted by their confidence scores. However, due to potential VLM mispredictions, not all knowledge units are reliable. Hence, we refine the knowledge units by assigning higher scores to those of high quality and suppressing the low-quality ones. We observe that once the student network has thoroughly integrated the knowledge from the teacher, we can jointly refer to the teacher knowledge and the integrated student knowledge $\tilde{Y}'$ to achieve this goal, by re-scoring the confidence score $C^d$ to $C_{bd}^d$ as:

$$C_{bd}^{d} = \frac{\big| M^d \big( \operatorname{argmax}(Y^d) \Leftrightarrow \operatorname{argmax}(\tilde{Y}') \big) \big|}{|M^d|}, \qquad (4)$$

where $\Leftrightarrow$ denotes the element-wise equality (comparison) operation. In this way, each knowledge unit $k^d$ is re-scored: those with high consensus between the teacher knowledge $\mathcal{K}$ and the integrated student knowledge $\tilde{Y}'$ receive higher scores, such as those on the chair legs shown in Fig. 3, and those with low consensus are suppressed by reduced scores, such as those on the chair arm (B-VLM) and back (P-VLM) in Fig. 3. Note that, for simplicity, we only display two scores in each shape of Fig. 3 and show the average pixel-wise scores for P-VLM. To determine whether the student network has thoroughly integrated the teacher's knowledge, i.e., moved from the initial knowledge $\tilde{Y}$ to the integrated knowledge $\tilde{Y}'$, we track the moving average of the loss value at every epoch and check whether the value in a subsequent epoch drops below a specified threshold $\tau$. Afterward, the student network continues to learn from the re-scored knowledge $\mathcal{K}'$ by minimizing the loss in Eq. 3 with $C$ replaced by $C_{bd}$, and produces the final part segmentation predictions $\tilde{Y}^f$.
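
The re-scoring of Eq. 4 and the trigger that decides when the student has integrated the teacher's knowledge can be sketched as below; the moving-average bookkeeping follows our reading of the text and is not the official code.

```python
# Hedged sketch of backward distillation (Eq. 4) and of the integration check; not the official code.
import torch

def rescore_units(units, student_probs):
    """units: list of (Y_d, M_d, C_d) as in the forward-distillation sketch;
    student_probs: (N, R) integrated student predictions Y~'. Returns units with re-scored confidences."""
    student_label = student_probs.argmax(dim=-1)                       # argmax(Y~')
    rescored = []
    for Y_d, M_d, _ in units:
        agree = (Y_d.argmax(dim=-1) == student_label) & M_d            # element-wise equality inside M^d
        c_bd = agree.float().sum() / M_d.float().sum().clamp(min=1.0)  # Eq. 4
        rescored.append((Y_d, M_d, c_bd))
    return rescored

def knowledge_integrated(loss_moving_avg, tau=0.01):
    """Our reading of the trigger: backward distillation starts once the per-epoch
    moving average of the distillation loss drops below the threshold tau."""
    return loss_moving_avg < tau
```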


Figure 3: Given the VLM output of view $v$, $B_v$ or $S_v$, we display the confidence scores before ($C$) and after ($C_{bd}$) performing backward distillation via Eq. 4, with $Y$ and $M$ obtained via Eq. 2. With backward distillation, inaccurate VLM predictions receive lower scores, such as the arm box in B-VLM, whose score is reduced from 0.7 to 0.1, and vice versa.

3.4 Test-time alignment

In general, our method performs the alignment with a shape collection before the student network is used to carry out 3D shape part segmentation. If such a pre-alignment is not preferred, we provide a special case of our method, test-time alignment (TTA), where the alignment is performed for every single shape at test time. To remain practical, TTA needs near-instantaneous completion. To that end, TTA employs a readily available 3D encoder, e.g., a pre-trained Point-M2AE [47], freezes its weights, and only updates the learnable parameters in the distillation head, which significantly speeds up TTA.
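
A minimal sketch of this setup is given below, with simple Linear layers standing in for the actual Point-M2AE encoder and distillation head.

```python
# Hedged sketch of test-time alignment: freeze the pre-trained 3D encoder, optimize only the head.
# The Linear layers below are stand-ins for the actual Point-M2AE encoder and the distillation head.
import torch
import torch.nn as nn

encoder = nn.Linear(3, 256)            # stand-in for the pre-trained 3D encoder (hypothetical)
head = nn.Linear(256, 4)               # stand-in for the distillation head, R = 4 parts

for p in encoder.parameters():
    p.requires_grad_(False)            # freeze the pre-trained encoder weights

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)   # update only the distillation head
```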

3.5 Implementation Details

The proposed framework is implemented in PyTorch [32] and optimized for 25 epochs with the Adam optimizer [19], using a learning rate of 0.001 and a batch size of 16. Unless otherwise specified, the student network employs Point-M2AE [47], pre-trained in a self-supervised way on the ShapeNet55 dataset [6], as the 3D encoder, freezes its weights, and only updates the learnable parameters in the distillation head. A multi-layer perceptron with 4 layers and ReLU activation [2] is adopted as the distillation head. To fairly compare with the competing methods [48, 52, 24, 1], we follow their respective settings, including the text prompts and the 2D rendering. These methods render each shape into 10 multi-view images, either from a sparse point cloud [48, 52], a dense point cloud [24], or a mesh shape [1]. Lastly, we follow [18, 38] to set a small threshold, τ=0.01, in our backward distillation, and apply class-balance weighting [9] during the alignment, based on the VLM predictions in the zero-shot setting and on the additional few-shot labels in the few-shot setting.
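
The training configuration described above can be sketched as follows; the hidden widths of the 4-layer MLP and the encoder feature dimension E are our own placeholders, as the paper does not specify them.

```python
# Hedged sketch of the training setup: a 4-layer MLP head with ReLU, Adam (lr = 0.001),
# batch size 16, 25 epochs. Hidden widths and the feature dimension E are our own placeholders.
import torch
import torch.nn as nn

E, R = 384, 4                          # encoder feature dimension and number of parts (placeholders)
head = nn.Sequential(
    nn.Linear(E, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, R),                  # per-point part logits
)
optimizer = torch.optim.Adam(head.parameters(), lr=0.001)
batch_size, num_epochs = 16, 25
```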

Table 1: Zero-shot segmentation on the ShapeNetPart dataset, reported in mIoU (%).*

| VLM | Data type | Method | Airplane | Bag | Cap | Chair | Earphone | Guitar | Knife | Laptop | Mug | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CLIP [37] | point cloud | PointCLIP [48] | 22.0 | 44.8 | 13.4 | 18.7 | 28.3 | 22.7 | 24.8 | 22.9 | 48.6 | 45.4 | 31.0 |
| CLIP [37] | point cloud | PointCLIPv2 [52] | 35.7 | 53.3 | 53.1 | 51.9 | 48.1 | 59.1 | 66.7 | 61.8 | 45.5 | 49.8 | 48.4 |
| CLIP [37] | point cloud | OpenScene [34] | 34.4 | 63.8 | 56.1 | 59.8 | 62.6 | 69.3 | 70.1 | 65.4 | 51.0 | 60.4 | 52.9 |
| CLIP [37] | point cloud | Ours (TTA) | 37.5 | 62.6 | 55.5 | 56.4 | 55.6 | 71.7 | 76.9 | 67.4 | 53.5 | 62.9 | 53.8 |
| CLIP [37] | point cloud | Ours (Pre) | 40.6 | 75.6 | 67.2 | 65.0 | 66.3 | 85.8 | 79.8 | 92.6 | 83.1 | 68.7 | 63.9 |
| GLIP [22] | point cloud | Ours (TTA) | 57.3 | 62.7 | 56.2 | 74.2 | 45.8 | 60.6 | 78.5 | 85.7 | 82.5 | 62.9 | 54.7 |
| GLIP [22] | point cloud | Ours (Pre) | 69.3 | 70.1 | 67.9 | 86.5 | 51.2 | 76.8 | 85.7 | 91.9 | 85.6 | 79.6 | 64.1 |
| GLIP [22] | mesh | SATR [1] | 32.2 | 32.1 | 21.8 | 25.2 | 19.4 | 37.7 | 40.1 | 50.4 | 76.4 | 22.4 | 32.3 |
| GLIP [22] | mesh | Ours (TTA) | 53.2 | 61.8 | 44.9 | 66.4 | 43.0 | 50.7 | 66.3 | 68.3 | 83.9 | 58.8 | 49.5 |
| GLIP [22] | mesh | Ours (Pre) | 64.8 | 64.4 | 51.0 | 67.4 | 48.3 | 64.8 | 70.0 | 83.1 | 86.5 | 79.3 | 56.3 |

*Results for other categories, including those of Table 2 and Table 3, can be seen in the supplementary material.

Table 2: Zero-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| VLM | Data type | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GLIP [22] | point cloud | PartSLIP [24] | 76.3 | 87.7 | 60.7 | 43.8 | 20.8 | 46.8 | 37.1 | 33.0 | 40.2 | 47.7 | 27.3 |
| GLIP [22] | point cloud | Ours (TTA) | 77.4 | 88.5 | 74.1 | 50.5 | 24.2 | 59.2 | 58.8 | 34.2 | 43.2 | 50.2 | 39.9 |

4 Experiments

4.1 Dataset and evaluation metric

We evaluate the effectiveness of our method on two main benchmark datasets, ShapeNetPart [44] and PartNetE [24]. The ShapeNetPart dataset contains 16 categories with a total of 31,963 shapes, while PartNetE contains 2,266 shapes covering 45 categories. The mean intersection over union (mIoU) [30] is adopted to evaluate the segmentation results on the test-set data, measured against the ground-truth labels.
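
For reference, a plain sketch of the per-shape mIoU computation is given below (not the official evaluation script); the convention for parts absent from both prediction and ground truth is an assumption.

```python
# Plain sketch of the per-shape mIoU; category mIoU averages this over the category's test shapes,
# and the overall score averages the category mIoUs. Not the official evaluation script.
import numpy as np

def shape_miou(pred, gt, num_parts):
    """pred, gt: (N,) integer part labels of one shape."""
    ious = []
    for r in range(num_parts):
        inter = np.logical_and(pred == r, gt == r).sum()
        union = np.logical_or(pred == r, gt == r).sum()
        ious.append(1.0 if union == 0 else inter / union)   # absent part counted as IoU = 1 (assumption)
    return float(np.mean(ious))
```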

4.2 Zero-shot segmentation

To compare with the competing methods [48, 52, 1, 24], we adopt each of their settings and report their mIoU performances from their respective papers. Specifically, for P-VLM, we follow PointCLIP [48] and PointCLIPv2 [52] to utilize CLIP [37] with ViT-B/32 [10] backbone and use their pipeline to obtain the pixel-wise predictions from CLIP. For B-VLM, a GLIP-Large model [22] is employed in our method to compare with PartSLIP and SATR which also use the same model. While most competing methods report their performances on the ShapeNetPart dataset, PartSLIP evaluates its method on the PartNetE dataset. In addition, we compare with OpenScene [34] by extending it for 3D part segmentation and use the same Point-M2AE [47] backbone and VLM CLIP for a fair comparison.

Accordingly, we carry out the comparison separately to ensure fairness, based on the employed VLM model and the shape data type, i.e., point cloud or mesh data, as shown in Tables 1 and 2. In Table 1, we provide two versions of our method, including test-time alignment (TTA) and pre-alignment (Pre) with a collection of shapes from the train-set data. Note that in the Pre version, our method does not use any labels (only unlabeled shape data are utilized).

First, we compare our method to PointCLIP and PointCLIPv2 (both utilize CLIP) on zero-shot segmentation for the ShapeNetPart dataset, as shown in the first part of Table 1. It is evident that our method, in both the TTA and pre-alignment versions, achieves substantial improvements in all categories. For the overall mIoU, calculated by averaging the mIoUs of all categories, our method attains 5.4% and 15.5% higher mIoU for the TTA and pre-alignment versions, respectively, compared to the best mIoU of the other methods. Such results reveal that our method, which simultaneously exploits appearance and geometric features, can better aggregate the 2D predictions for 3D part segmentation than directly averaging the corresponding 2D predictions as in the competing methods, where geometric evidence is not explored. We further compare with OpenScene [34] under the same setting as ours (Pre), and our method substantially outperforms it. One major reason is that our method better handles the inconsistency of VLM predictions (issue 𝓘𝟐) through backward distillation.

Next, as shown in the last three rows of Table 1, we compare our method to SATR [1], which works on mesh shapes. To obtain the mesh face predictions, we propagate the point predictions via a nearest-neighbor approach as in [17], where each face is voted on by its five nearest points. Our method achieves 17.2% and 24% higher overall mIoU than SATR for the TTA and pre-alignment versions, respectively. Then, we compare our method with PartSLIP [24] in Table 2, where only TTA results are provided since the PartNetE dataset does not provide train-set data. One can see that our method consistently obtains better segmentations, with 12.6% higher overall mIoU than PartSLIP.
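
The point-to-face propagation just described can be sketched as a k-nearest-neighbor vote, as below; details may differ from [17].

```python
# Hedged sketch of point-to-face propagation: each mesh face is labeled by a majority vote
# over its five nearest points; details may differ from [17].
import numpy as np
from scipy.spatial import cKDTree

def faces_from_points(face_centers, points, point_labels, k=5):
    """face_centers: (F, 3); points: (N, 3); point_labels: (N,) integer part labels."""
    tree = cKDTree(points)
    _, idx = tree.query(face_centers, k=k)                 # (F, k) indices of nearest points per face
    neighbour_labels = point_labels[idx]                   # (F, k)
    return np.array([np.bincount(row).argmax() for row in neighbour_labels])
```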

In PartSLIP and SATR, as GLIP is utilized, the uncovered 3D regions (issue 𝓘𝟏) can be intensified by possible undetected areas, and the negative transfer (issue 𝓘𝟐) may also be escalated by semantic leaking, where box predictions cover pixels of other semantics. In contrast, our method better alleviates these issues, thereby achieving substantially higher mIoU scores. Within our method, the pre-alignment version achieves better segmentation results than TTA. This is expected since, in the pre-alignment version, the student network can distill the knowledge from a collection of shapes instead of an individual shape.


Figure 4: Visualization of the zero-shot segmentation results, drawn in different colors, on the ShapeNetPart dataset. We render the PartSLIP results on the ShapeNetPart data to have the same visualization of shape inputs. While occluded and undetected regions (issue 𝓘𝟏) are shown in black and gray, respectively, the blue and red arrows highlight several cases of issues 𝓘𝟐 and 𝓘𝟑.

Besides the foregoing quantitative comparisons, a qualitative comparison of the segmentation results is presented in Fig. 4. It is readily observed that the competing methods suffer from the lack of 3D segmentation in uncovered regions (issue 𝓘𝟏), caused by either occlusion or not being covered by any bounding box, drawn in black and gray, respectively. Moreover, these methods may also encounter negative transfer caused by inaccurate VLM outputs (issue 𝓘𝟐), such as the cases pointed to by blue arrows, with notably degraded outcomes in SATR due to semantic leaking. Our method performs cross-modal distillation and alleviates these two issues, as can be seen in Fig. 4. In addition, because the competing methods directly transfer 2D predictions to the 3D space and rely on each shape independently, erroneous 2D predictions simply remain as incorrect 3D segmentation (issue 𝓘𝟑), such as the undetected chair arms and guitar heads pointed to by red arrows. Our method also addresses this issue by exploiting geometric features across multiple shapes.

Table 3: Few-shot segmentation on the PartNetE dataset, reported in mIoU (%).

| | Method | Bottle | Cart | Chair | Display | Kettle | Knife | Lamp | Oven | Suitcase | Table | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Non-VLM-based | PointNet++ [35] | 27.0 | 11.6 | 42.2 | 30.2 | 28.6 | 22.2 | 10.5 | 19.4 | 3.3 | 7.3 | 20.4 |
| Non-VLM-based | PointNext [36] | 67.6 | 47.7 | 65.1 | 53.7 | 60.6 | 59.7 | 55.4 | 36.8 | 14.5 | 22.1 | 40.6 |
| Non-VLM-based | ACD [12] | 22.4 | 31.5 | 39.0 | 29.2 | 40.2 | 39.6 | 13.7 | 8.9 | 13.2 | 13.5 | 23.2 |
| Non-VLM-based | Prototype [51] | 60.1 | 36.8 | 70.8 | 67.3 | 62.7 | 50.4 | 38.2 | 36.5 | 35.5 | 25.7 | 44.3 |
| Non-VLM-based | Point-M2AE [47] | 72.4 | 74.5 | 83.4 | 74.3 | 64.3 | 68.0 | 57.6 | 53.3 | 57.5 | 33.6 | 56.4 |
| VLM-based (GLIP [22]) | PartSLIP [24] | 83.4 | 88.1 | 85.3 | 84.8 | 77.0 | 65.2 | 60.0 | 73.5 | 70.4 | 42.4 | 59.4 |
| VLM-based (GLIP [22]) | Ours | 84.6 | 90.1 | 88.4 | 87.4 | 78.6 | 71.4 | 69.2 | 72.8 | 73.4 | 63.3 | 65.9 |

4.3 Few-shot segmentation

We further demonstrate the effectiveness of our method in a few-shot scenario by following the setting used in PartSLIP [24]. Specifically, we employ the fine-tuned GLIP model [22] provided by PartSLIP, which is fine-tuned with 8-shot labeled shapes of the PartNetE dataset [24] for each category. In addition to the alignment via Eq. 3, the student network also minimizes a standard cross-entropy segmentation loss on the 8 labeled shapes.
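
The few-shot objective can thus be sketched as the Eq. 3 distillation term plus a cross-entropy term on the labeled shapes; the weighting factor lam is our own placeholder and is not stated in the paper.

```python
# Hedged sketch of the few-shot objective: Eq. 3 distillation term plus a standard cross-entropy
# on a labeled shape. The weighting factor lam is our own placeholder, not stated in the paper.
import torch
import torch.nn.functional as F

def few_shot_loss(distill_term, student_logits=None, labels=None, lam=1.0):
    """distill_term: precomputed Eq. 3 loss; labels: (N,) part indices of a labeled shape, or None."""
    loss = distill_term
    if labels is not None:
        loss = loss + lam * F.cross_entropy(student_logits, labels)
    return loss
```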

As shown in Table 3, the methods dedicated to few-shot 3D segmentation, ACD [12] and Prototype [51], are adapted to the PointNet++ [35] and PointNext [36] backbones, respectively, and improve the average performance of these backbones. PartSLIP, on the other hand, leverages multi-view GLIP predictions for 3D segmentation and further improves the mIoU, but there are still substantial performance gaps compared to our method, which distills the GLIP predictions instead. We also present the results of fine-tuning Point-M2AE with the few-shot labels, which shows lower performance than ours, highlighting the significant contribution of our distillation framework. For more qualitative results, see the supplementary material.

4.4 Leveraging generated data

Since only unlabeled 3D shape data are required for our method to perform cross-modal distillation, existing generative models [31, 33] can facilitate effortless generation of 3D shapes, and the generated data can be smoothly incorporated into our method. Specifically, we first adopt DiT-3D [31], pre-trained on the ShapeNet55 dataset [6], to generate point clouds of shapes, 500 shapes for each category, and further employ SAP [33] to transform the generated point clouds into mesh shapes. These generated mesh shapes can then be utilized in our method for distillation. Table 4 shows the results evaluated on the test-set data of the ShapeNetPart [44] and COSEG [42] datasets for several shape categories, using GLIP as the VLM.

One can see that by distilling from the generated data alone, our method already achieves competitive results on the ShapeNetPart dataset compared to distilling from the train-set data. Since DiT-3D is pre-trained on the ShapeNet55 dataset, which contains the ShapeNetPart data, we also evaluate its performance on the COSEG dataset to show that such results transfer well to shapes from another dataset. Finally, Table 4 (the last row) reveals that using generated data as a supplementary knowledge source can further increase the mIoU performance. Such results suggest that if a collection of shapes is available, generated data can be employed as a supplementary knowledge source to improve performance. Conversely, if a collection of shapes does not exist, generative models can be employed for shape creation, and the created shapes subsequently serve as the knowledge source in our method.

4.5 Ablation studies

Proposed components.

We perform ablation studies on the proposed components, and the mIoU scores in the 2D space (calculated between the VLM predictions and their corresponding 2D ground truths projected from 3D, and weighted by the confidence scores; see the supplementary material for details) and in the 3D space on three categories of the ShapeNetPart dataset are shown in rows (1) to (9) of Table 5. In (1), only GLIP box predictions are utilized to obtain 3D segmentations, i.e., part labels are assigned by voting from all visible points within the multi-view box predictions. These numbers serve as baselines and are subject to issues 𝓘𝟏∼𝓘𝟑. In (2) and (3), 3D segmentations are achieved via forward distillation from the GLIP predictions to the student network using Eq. 3, for the test-time alignment (TTA) and pre-alignment (Pre) versions, resulting in significant improvements over the baselines, with more than 10% and 14% higher mIoUs, respectively. Such results demonstrate that the proposed cross-modal distillation can better utilize the 2D multi-view predictions for 3D part segmentation, alleviating 𝓘𝟏∼𝓘𝟑.
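
The baseline in row (1) can be sketched as a simple multi-view voting scheme, as below (our own illustration, reusing the knowledge-unit layout from Sec. 3.2); uncovered points remain unlabeled.

```python
# Hedged sketch of the row (1) baseline: every point covered by a multi-view box prediction
# accumulates one vote for that prediction's part; uncovered points remain unlabeled (-1).
import torch
import torch.nn.functional as F

def baseline_vote(units, num_points, num_parts):
    """units: list of (Y_d, M_d, C_d) as in the knowledge-unit sketch of Sec. 3.2."""
    votes = torch.zeros(num_points, num_parts)
    for Y_d, M_d, _ in units:
        votes[M_d] += F.one_hot(Y_d[M_d].argmax(dim=-1), num_parts).float()
    labels = torch.full((num_points,), -1, dtype=torch.long)
    covered = votes.sum(dim=-1) > 0
    labels[covered] = votes[covered].argmax(dim=-1)
    return labels
```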

Table 4: Segmentation mIoU (%) by leveraging generated data.

| Distilled data | ShapeNetPart [44]: Airplane | ShapeNetPart [44]: Chair | ShapeNetPart [44]: Guitar | COSEG [42]: Chair | COSEG [42]: Guitar |
|---|---|---|---|---|---|
| Train-set (baseline) | 69.3 | 86.2 | 76.8 | 96.4 | 68.0 |
| Gen. data | 69.0 | 85.3 | 75.6 | 96.1 | 67.5 |
| Gen. data & train-set | 70.8 | 88.4 | 78.3 | 97.4 | 70.2 |

Table 5: Ablation study on the proposed method.

| No | VLM | Pre | BD | Student network | Airplane 2D | Airplane 3D | Chair 2D | Chair 3D | Knife 2D | Knife 3D |
|---|---|---|---|---|---|---|---|---|---|---|
| (1) | GLIP [22] | | | | 42.8 | 40.2 | 60.2 | 60.1 | 53.6 | 57.2 |
| (2) | GLIP [22] | | | ✓ | 42.8 | 56.2 | 60.2 | 73.5 | 53.6 | 77.6 |
| (3) | GLIP [22] | ✓ | | ✓ | 42.8 | 64.3 | 60.2 | 84.2 | 53.6 | 84.5 |
| (4) | GLIP [22] | | ✓ | ✓ | 44.3 | 57.3 | 61.7 | 74.2 | 54.8 | 78.5 |
| (5) | GLIP [22] | ✓ | ✓ | ✓ | 48.2 | 69.3 | 63.2 | 86.5 | 55.0 | 85.7 |
| (6) exclude 𝓘𝟏 | GLIP [22] | ✓ | ✓ | ✓ | 48.2 | 62.5 | 63.2 | 80.4 | 55.0 | 81.2 |
| (7) w/o pretrain | GLIP [22] | ✓ | ✓ | ✓ | 48.2 | 69.1 | 63.2 | 86.7 | 55.0 | 85.3 |
| (8) | CLIP [37] | | ✓ | ✓ | 34.6 | 38.4 | 50.4 | 63.6 | 66.8 | 77.4 |
| (9) | CLIP [37] | ✓ | ✓ | ✓ | 37.8 | 40.6 | 54.2 | 65.0 | 68.4 | 78.9 |

We further add backward distillation (BD) in (4) and (5), which substantially improves the knowledge source in 2D, e.g., from 42.8% to 48.2% for the airplane category in the pre-alignment version, and subsequently enhances the 3D segmentation. We observe a higher impact (improvement) on the pre-alignment compared to TTA versions, i.e., in (4) and (5), as the student network of the former can better integrate the knowledge from a collection of shapes. A similar trend of improvement can be observed for a similar ablation performed with CLIP [37] used as the VLM (in (8) and (9)).

In (6), we exclude our method’s predictions for the uncovered points to simulate issue 𝓘𝟏, and the reduced mIoUs compared to (5), e.g., from 86.5% to 80.4% for the chair category, reveal that our method effectively alleviates issue 𝓘𝟏. Finally, instead of using the pre-trained weights of Point-M2AE [47] and freezing them as the 3D encoder as in (5), we initialize these weights (with the default PyTorch [32] initialization) and set them to be learnable as in (7). Both settings produce comparable results (within 0.4%). The main purpose of using the pre-trained weights and freezing them is faster convergence, especially for test-time alignment. Please refer to the supplementary material for the comparison of convergence curves.

Number of views.

We render V=10 multi-view images for each shape input in our main experiment, and Fig. 5 (left) shows the mIoU scores with different values of V. A substantial drop is observed when utilizing V<6, and small increases are obtained when a larger V is used.

Various shape types for 2D multi-view rendering.

We render 10 multi-view images from various shape data types, i.e., (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud (∼300k points) as used in PartSLIP [24], and (iv) sparse gray point cloud (2,048 points), using PyTorch3D [16] to render (i)-(iii) and the rendering method in [52] for (iv). Fig. 5 (right) summarizes the results on the ShapeNetPart dataset, with GLIP used as the VLM. Note that the first three shape types produce comparable mIoUs, with slightly higher scores when the colored mesh or dense colored point cloud is utilized. When the sparse gray point cloud is used, a mild mIoU decrease is observed. Please refer to the supplementary material for more results on (i)-(iv).


Figure 5: Ablation study on the number of views and various shape types for 2D multi-view rendering on the ShapeNetPart dataset.

Limitation.

The main limitation of our method is that the segmentation results are affected by the quality of the VLM predictions, since VLMs are generally pre-trained to recognize object- or sample-level categories rather than part-level categories. For instance, GLIP can satisfactorily locate part semantics for the chair category but with lower quality for the earphone category, while CLIP can favorably locate part semantics for the earphone category but with less favorable results for the airplane category. Hence, exploiting multiple VLMs is a potential direction for future work. Nonetheless, the proposed method, which currently employs a single VLM, already boosts the segmentation results significantly compared to existing methods.

5 Conclusion

We present a cross-modal distillation framework that transfers 2D knowledge from a vision-language model (VLM) to facilitate 3D shape part segmentation and generalizes well to VLMs with either bounding-box or pixel-wise predictions. In the proposed method, backward distillation is introduced to enhance the quality of the 2D predictions and subsequently improve the 3D segmentation. The proposed approach can also leverage existing generative models for shape creation, and the generated shapes can be smoothly incorporated as knowledge sources for distillation. In extensive experiments, the proposed method is compared with existing methods on widely used benchmark datasets, including ShapeNetPart and PartNetE, and consistently outperforms them by substantial margins in both zero-shot and few-shot scenarios, on 3D data given as point clouds or mesh shapes.

Acknowledgment.

This work was supported in part by the National Science and Technology Council (NSTC) under grants 112-2221-E-A49-090-MY3, 111-2628-E-A49-025-MY3, 112-2634-F-006-002 and 112-2634-F-A49-007. This work was funded in part by MediaTek and NVIDIA.


Supplementary Material

Table 6: Zero-shot segmentation on all 16 categories of the ShapeNetPart dataset [44], reported in mIoU (%). In this table, TTA and Pre denote the test-time alignment and pre-alignment versions of our method, while VLM stands for vision-language model (see the main paper for details).

| Category | PointCLIP [48] (CLIP, point cloud) | PointCLIPv2 [52] (CLIP, point cloud) | Ours TTA (CLIP, point cloud) | Ours Pre (CLIP, point cloud) | Ours TTA (GLIP, point cloud) | Ours Pre (GLIP, point cloud) | SATR [1] (GLIP, mesh) | Ours TTA (GLIP, mesh) | Ours Pre (GLIP, mesh) |
|---|---|---|---|---|---|---|---|---|---|
| Airplane | 22.0 | 35.7 | 37.5 | 40.6 | 57.3 | 69.3 | 32.2 | 53.2 | 64.8 |
| Bag | 44.8 | 53.3 | 62.6 | 75.6 | 62.7 | 70.1 | 32.1 | 61.8 | 64.4 |
| Cap | 13.4 | 53.1 | 55.5 | 67.2 | 56.2 | 67.9 | 21.8 | 44.9 | 51.0 |
| Car | 30.4 | 34.5 | 36.4 | 41.2 | 32.4 | 39.2 | 22.3 | 30.2 | 32.3 |
| Chair | 18.7 | 51.9 | 56.4 | 65.0 | 74.2 | 86.5 | 25.2 | 66.4 | 67.4 |
| Earphone | 28.3 | 48.1 | 55.6 | 66.3 | 45.8 | 51.2 | 19.4 | 43.0 | 48.3 |
| Guitar | 22.7 | 59.1 | 71.7 | 85.8 | 60.6 | 76.8 | 37.7 | 50.7 | 64.8 |
| Knife | 24.8 | 66.7 | 76.9 | 79.8 | 78.5 | 85.7 | 40.1 | 66.3 | 70.0 |
| Lamp | 39.6 | 44.7 | 45.8 | 63.1 | 34.5 | 43.5 | 21.6 | 30.5 | 35.2 |
| Laptop | 22.9 | 61.8 | 67.4 | 92.6 | 85.7 | 91.9 | 50.4 | 68.3 | 83.1 |
| Motorbike | 26.3 | 31.4 | 33.4 | 38.2 | 30.6 | 37.8 | 25.4 | 28.8 | 32.5 |
| Mug | 48.6 | 45.5 | 53.5 | 83.1 | 82.5 | 85.6 | 76.4 | 83.9 | 86.5 |
| Pistol | 42.6 | 46.1 | 48.2 | 55.8 | 39.6 | 48.5 | 34.1 | 37.4 | 40.9 |
| Rocket | 22.7 | 46.7 | 49.3 | 49.5 | 36.8 | 48.9 | 33.2 | 41.1 | 45.3 |
| Skateboard | 42.7 | 45.8 | 47.7 | 49.2 | 34.2 | 43.5 | 22.3 | 26.2 | 34.5 |
| Table | 45.4 | 49.8 | 62.9 | 68.7 | 62.9 | 79.6 | 22.4 | 58.8 | 79.3 |
| Overall | 31.0 | 48.4 | 53.8 | 63.9 | 54.7 | 64.1 | 32.3 | 49.5 | 56.3 |

Table 7: Segmentation on all 45 categories of the PartNetE dataset [24], reported in mIoU (%). In this table, TTA denotes our method with test-time alignment (see the main paper for details).

| Category | PartSLIP [24] (zero-shot) | Ours TTA (zero-shot) | PartSLIP [24] (few-shot) | Ours (few-shot) |
|---|---|---|---|---|
| Bottle | 76.3 | 77.4 | 83.4 | 84.6 |
| Box | 57.5 | 69.7 | 84.5 | 87.9 |
| Bucket | 2.0 | 16.8 | 36.5 | 50.7 |
| Camera | 21.4 | 29.4 | 58.3 | 60.1 |
| Cart | 87.7 | 88.5 | 88.1 | 90.1 |
| Chair | 60.7 | 74.1 | 85.3 | 88.4 |
| Clock | 26.7 | 23.6 | 37.6 | 37.2 |
| Coffee machine | 25.4 | 26.8 | 37.8 | 40.2 |
| Dishwasher | 10.3 | 18.6 | 62.5 | 60.2 |
| Dispenser | 16.5 | 11.4 | 73.8 | 74.7 |
| Display | 43.8 | 50.5 | 84.8 | 87.4 |
| Door | 2.7 | 41.1 | 40.8 | 55.5 |
| Eyeglasses | 1.8 | 59.7 | 88.3 | 91.1 |
| Faucet | 6.8 | 33.3 | 71.4 | 73.5 |
| Folding chair | 91.7 | 89.7 | 86.3 | 90.7 |
| Globe | 34.8 | 90.0 | 95.7 | 97.4 |
| Kettle | 20.8 | 24.2 | 77.0 | 78.6 |
| Keyboard | 37.3 | 38.5 | 53.6 | 70.8 |
| Kitchenpot | 4.7 | 36.8 | 69.6 | 69.7 |
| Knife | 46.8 | 59.2 | 65.2 | 71.4 |
| Lamp | 37.1 | 58.8 | 66.1 | 69.2 |
| Laptop | 27.0 | 37.1 | 29.7 | 40.0 |
| Lighter | 35.4 | 37.3 | 64.7 | 64.9 |
| Microwave | 16.6 | 23.2 | 42.7 | 43.8 |
| Mouse | 27.0 | 18.6 | 44.0 | 46.9 |
| Oven | 33.0 | 34.2 | 73.5 | 72.8 |
| Pen | 14.6 | 15.7 | 71.5 | 74.4 |
| Phone | 36.1 | 37.3 | 48.4 | 50.8 |
| Pliers | 5.4 | 51.9 | 33.2 | 90.4 |
| Printer | 0.8 | 3.3 | 4.3 | 6.3 |
| Refrigerator | 20.2 | 25.2 | 55.8 | 58.1 |
| Remote | 11.5 | 13.2 | 38.3 | 40.7 |
| Safe | 22.4 | 18.2 | 32.2 | 58.6 |
| Scissors | 21.8 | 64.4 | 60.3 | 68.8 |
| Stapler | 20.9 | 65.1 | 84.8 | 86.3 |
| Storage furniture | 29.5 | 30.6 | 53.6 | 56.5 |
| Suitcase | 40.2 | 43.2 | 70.4 | 73.4 |
| Switch | 9.5 | 30.3 | 59.4 | 60.7 |
| Table | 47.7 | 50.2 | 42.5 | 63.3 |
| Toaster | 13.8 | 11.4 | 60.0 | 58.7 |
| Toilet | 20.6 | 22.5 | 53.8 | 55.0 |
| Trash can | 30.1 | 49.3 | 22.3 | 70.0 |
| Usb | 10.9 | 39.1 | 54.4 | 64.3 |
| Washing machine | 12.5 | 12.9 | 53.5 | 55.1 |
| Window | 5.2 | 45.3 | 75.4 | 78.1 |
| Overall | 27.3 | 39.9 | 59.4 | 65.9 |

6 3D segmentation scores for full categories

We provide 3D segmentation scores, reported in mIoU, for all categories of the ShapeNetPart [44] and PartNetE [24] datasets in Tables 6 and 7, respectively. Table 6 is associated with Table 1 in the main paper, while Table 7 is associated with Tables 2 and 3. In Table 6, the 16 categories of the ShapeNetPart dataset are reported, while the 45 categories of the PartNetE dataset are presented in Table 7. From the tables, it is readily observed that the proposed method, PartDistill, attains substantial improvements compared to the competing methods [48, 52, 1, 24] in most categories.

7 Evaluating 2D predictions

In the ablation studies of our method’s components presented in Table 5, we provide mIoU scores in the 2D space, mIoU$_{2D}$, to evaluate the quality of the 2D predictions measured against the 2D ground truths, before and after performing backward distillation, which re-scores the confidence score of each knowledge unit. Here, the 2D ground truths are obtained by projecting the 3D mesh (face) part segmentation labels to the 2D space using the camera parameters utilized when performing the 2D multi-view rendering.

We first explain how to calculate mIoU$_{2D}$ when a vision-language model (VLM) with pixel-wise predictions (P-VLM) is used in our method, and later explain the case of a VLM with bounding-box predictions (B-VLM). In each view, let $\{s_i\}_{i=1}^{\rho}$ be the prediction components (see Eq. 1 in the main paper) of the P-VLM, with $C_i$ denoting the confidence score of $s_i$, and let $\mathcal{G}$ be the corresponding 2D ground truth. We first calculate IoU$_{2D}$ for each semantic part $r$ as

$$\mathrm{IoU_{2D}}(r) = \frac{\mathcal{I}(r)}{\mathcal{I}(r) + \lambda(r) + \gamma(r)}, \qquad (5)$$

where

$$\mathcal{I}(r) = \sum_{i \in \phi(r)} \mathrm{Avg}(C_i) \, \big( s_i \cap \mathcal{G}_r \big), \qquad (6)$$
$$\lambda(r) = \mathrm{Avg}(C_{\phi(r)}) \, \Big( \big( \bigcup_{i \in \phi(r)} s_i \big) \notin \mathcal{G}_r \Big), \qquad (7)$$

and

$$\gamma(r) = \mathcal{G}_r \notin \bigcup_{i \in \phi(r)} s_i, \qquad (8)$$

with $\phi(r)$ denoting a function returning the indices of $\{s_i\}_{i=1}^{\rho}$ that predict part $r$, $\mathrm{Avg}$ denoting an averaging operation, and $\mathcal{G}_r$ indicating the ground truth of part $r$.

Eq. 6 represents the intersection between the 2D prediction pixels and the corresponding ground-truth pixels, weighted by their confidence scores; Eq. 7 gives the union of the 2D prediction pixels that do not intersect with the corresponding ground truths, weighted by the average of all confidence scores associated with part $r$; and Eq. 8 gives the ground-truth pixels that are not intersected by the union of the 2D predictions. We then calculate the IoU$_{2D}$ score for each semantic part $r$ in every view $v$ and report their mean as mIoU$_{2D}$.

Note that we involve the confidence scores as weights when calculating mIoU$_{2D}$. This allows us to compare the quality of the 2D predictions before and after applying backward distillation, using the confidence scores before and after this process. To compute the mIoU$_{2D}$ scores when a B-VLM is used in our method, we can use Eq. 5 with $s_i$ in Eq. 6 ∼ Eq. 8 replaced by $\mathcal{F}(b_i)$, where $\mathcal{F}$ denotes an operation excluding the background pixels covered by $b_i$.
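
A sketch of Eqs. 5-8 for one part in one view is given below; it assumes scalar confidence scores per component, whereas the paper also allows pixel-wise scores for P-VLM.

```python
# Hedged sketch of Eqs. 5-8 for a single part r in a single view; scalar confidences per component
# are assumed, whereas the paper also allows pixel-wise scores for P-VLM.
import numpy as np

def weighted_iou2d(components, confidences, gt_mask):
    """components: list of (H, W) bool masks s_i predicting part r; confidences: matching scores C_i;
    gt_mask: (H, W) bool ground truth G_r."""
    union_pred = np.zeros_like(gt_mask, dtype=bool)
    inter = 0.0
    for s_i, c_i in zip(components, confidences):
        inter += c_i * np.logical_and(s_i, gt_mask).sum()            # Eq. 6, confidence-weighted
        union_pred |= s_i
    avg_c = float(np.mean(confidences)) if len(confidences) > 0 else 0.0
    false_pos = avg_c * np.logical_and(union_pred, ~gt_mask).sum()   # Eq. 7
    false_neg = np.logical_and(gt_mask, ~union_pred).sum()           # Eq. 8
    denom = inter + false_pos + false_neg
    return inter / denom if denom > 0 else 1.0                       # Eq. 5
```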

8 Additional visualizations

8.1 Visualization of few-shot segmentation


Figure 6: Visualization of few-shot segmentation results obtained with our method on the PartNetE dataset [24]. Each semantic part is drawn in a different color.

In Figure 6, we present several visualizations of few-shot segmentation results obtained with our method, associated with Table 3 in the main paper. Following the prior work [24], 8-shot labeled shapes are utilized to carry out the few-shot segmentation. From the figure, it is evident that our method achieves satisfactory segmentation results.

8.2 Convergence curves

In the ablation studies presented in Table 5, we compare two approaches for the 3D encoder of our student network. First, we employ a pre-trained Point-M2AE [47] backbone, freeze its weights, and only update the learnable parameters in the student network's distillation head. Second, we utilize a Point-M2AE backbone with its weights initialized by the default PyTorch [32] initialization and set them to be learnable, together with the parameters in the distillation head. From the table, we observe comparable results between the two settings (see rows (5) and (7) for the first and second approaches, respectively).

We then visualize the convergence curves of both settings, as depicted in Figure 7. From the figure, it can be seen that the loss in the first approach converges significantly faster than in the second approach. As a result, the first approach also starts to perform backward distillation at a substantially earlier epoch than the second one.


Figure 7: Convergence curves of our method's losses over the optimization epochs. While the first approach employs a pre-trained Point-M2AE [47] model and freezes its weights, the second approach initializes the Point-M2AE weights from scratch and sets them to be learnable.

8.3 2D rendering from various shape types

We present several 2D renderings from various shape types, including (i) gray mesh, (ii) colored mesh, (iii) dense colored point cloud, and (iv) sparse gray point cloud, as shown in Figure 8. While PartSLIP [24] renders the multi-view images using type (iii), SATR [1] uses type (i). PointCLIP [48] and PointCLIPv2 [52] use type (iv) to render their multi-view images.

Figure 8: Examples of 2D multi-view renderings from the four shape types (i)-(iv).

