Anything
◼ Open-Vocabulary SAM: Segment and Recognize Twenty-thousand Classes Interactively
◼ Crowd-SAM: SAM as a Smart Annotator for Object Detection in Crowded Scenes
◼ PQ-SAM: Post-training Quantization for Segment Anything Model
◼ Pro2SAM: Mask Prompt to SAM with Grid Points for Weakly Supervised Object Localization
◼ CC-SAM: Enhancing SAM with Cross-feature Attention and Context for Ultrasound Image Segmentation
◼ CAT-SAM: Conditional Tuning for Few-Shot Adaptation of Segment Anything Model
◼ WPS-SAM: Towards Weakly-Supervised Part Segmentation with Foundation Models
◼ VP-SAM: Taming Segment Anything Model for Video Polyp Segmentation via Disentanglement and Spatio-temporal Side Network
◼ Domesticating SAM for Breast Ultrasound Image Segmentation via Spatial-frequency Fusion and Uncertainty Correction
◼ Segment and Recognize Anything at Any Granularity
◼ Better Call SAL: Towards Learning to Segment Anything in Lidar
◼ SAM-COD: SAM-guided Unified Framework for Weakly-Supervised Camouflaged Object Detection
◼ LiteSAM is Actually What You Need for Segment Everything
◼ Learning to Adapt SAM for Segmenting Cross-domain Point Clouds
◼ SAM4MLLM: Enhance Multi-Modal Large Language Model for Referring Expression Segmentation
•: presented today