The thesis focuses on improving the reliability of deep learning models, particularly in detecting out-of-distribution (OoD) samples: data points that differ from the training distribution and can lead to incorrect, overconfident predictions. This is especially important in safety-critical fields such as healthcare and autonomous driving, where errors can have serious consequences. The research builds on vision foundation models (VFMs) such as CLIP and DINO, whose strong pretrained representations enable effective learning from limited labeled data. The proposed work aims to develop fine-tuning methods that preserve the robustness of these models, ensuring they remain effective at detecting OoD samples after adaptation to downstream tasks. Additionally, the thesis will explore solutions for handling data distributions that shift over time, a common challenge in real-world deployments. The expected results include new techniques for OoD detection and adaptive methods for dynamic environments, ultimately enhancing the safety and reliability of AI systems in practical scenarios.
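
To make the notion of OoD detection concrete, the sketch below shows a common baseline, maximum softmax probability (MSP) thresholding, in which a sample is flagged as OoD when the classifier's top softmax confidence falls below a threshold. This is an illustrative example only, with random logits standing in for a fine-tuned VFM classifier head and an arbitrary threshold; it is not the method proposed in the thesis.

```python
import torch
import torch.nn.functional as F


def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability per sample: higher means more in-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values


def flag_ood(logits: torch.Tensor, threshold: float = 0.6) -> torch.Tensor:
    """Flag samples whose confidence falls below the (illustrative) threshold as OoD."""
    return msp_score(logits) < threshold


if __name__ == "__main__":
    torch.manual_seed(0)
    # Placeholder logits for a batch of 4 samples over 10 classes;
    # in practice these would come from a classifier head on top of a VFM.
    logits = torch.randn(4, 10)
    print(msp_score(logits))   # per-sample confidence scores
    print(flag_ood(logits))    # boolean OoD flags
```

In practice, the threshold is typically calibrated on held-out in-distribution data (for example, to achieve a target true-positive rate), and stronger scores than MSP exist; the sketch only illustrates the score-and-threshold pattern that OoD detectors share.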