Medical image segmentation poses challenges due to domain gaps, data modality variations, and dependence on domain knowledge or experts, especially in low- and middle-income countries (LMICs). In contrast, humans, given a few exemplars with corresponding labels, can segment medical images from different domains even without extensive domain-specific clinical training. Moreover, current SAM-based medical segmentation models rely on fine-grained visual prompts at test time, such as the maximum bounding rectangle derived from manually annotated lesion segmentation masks, as the bounding box prompt. However, in actual clinical diagnosis, no such prior knowledge is available to produce these fine-grained prompts. Our experimental results also reveal that previous models nearly fail to predict when given coarser bounding box prompts. Considering these drawbacks, we propose a domain-aware selective adaptation approach that adapts the general knowledge learned by a large model trained on natural images to the corresponding medical domains/modalities, with access to only a few (e.g., fewer than 5) exemplars. Our method mitigates the aforementioned limitations, providing an efficient and LMIC-friendly solution. Extensive experimental analysis demonstrates the effectiveness of our approach, offering potential advancements in healthcare diagnostics and clinical applications in LMICs.
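To make the two prompting regimes concrete, the sketch below contrasts the fine-grained prompt (the maximum bounding rectangle of a ground-truth mask) with a coarser box. This is an illustrative assumption, not the paper's evaluation code: the mask is assumed to be a binary NumPy array, and the `expand` ratio and random jitter are hypothetical choices standing in for a box drawn without access to the annotated mask.

```python
import numpy as np

def tight_bbox(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Fine-grained prompt: the maximum bounding rectangle of a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())  # (x0, y0, x1, y1)

def coarse_bbox(mask: np.ndarray, expand: float = 0.5, seed: int = 0) -> tuple[int, int, int, int]:
    """Coarse prompt: the tight box enlarged and randomly shifted, emulating a
    clinician's rough box drawn without the ground-truth mask (illustrative only)."""
    rng = np.random.default_rng(seed)
    x0, y0, x1, y1 = tight_bbox(mask)
    h, w = mask.shape
    dx, dy = (x1 - x0) * expand, (y1 - y0) * expand          # enlargement margins
    jx, jy = rng.uniform(-dx / 2, dx / 2), rng.uniform(-dy / 2, dy / 2)  # offset
    return (int(max(0, x0 - dx + jx)), int(max(0, y0 - dy + jy)),
            int(min(w - 1, x1 + dx + jx)), int(min(h - 1, y1 + dy + jy)))
```

Under this setup, evaluating a promptable segmentation model with `coarse_bbox` rather than `tight_bbox` probes the clinically realistic condition described above, where no annotated mask exists at test time.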