Distributional shifts and incomplete testing
- [Instructor] Accounting for context is something that people do innately and intuitively. For example, when purchasing an electrical appliance, you probably look for the UL, or Underwriters Laboratories, certification to make sure the product was safety tested. You also have context that helps keep you safe. For example, you know not to drop electrical devices in water while they're plugged in, even if they were safety tested. That contextual element is important to safety. AI systems need to be designed with context in mind and used in the same environments they were trained in. Otherwise, they're vulnerable to unintentional failures caused by distributional shifts and incomplete testing. Distributional shifts occur when there are mismatches between the data the system was trained on and the data it encounters during deployment. When these shifts become too wide, that is, when the difference between the training data and the production data is too great, the performance and accuracy of the system…
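
As a rough illustration, and not part of the course material itself, the following is a minimal Python sketch of one common way to flag a distributional shift for a single numeric feature: comparing the training-time sample against the production sample with a two-sample Kolmogorov-Smirnov test from SciPy. The feature names, sample sizes, and significance threshold here are illustrative assumptions, not prescriptions from the course.

    # Minimal sketch: flag a distributional shift between training data and
    # production data for one numeric feature using a two-sample KS test.
    # Thresholds and data are illustrative only.
    import numpy as np
    from scipy.stats import ks_2samp

    def detect_shift(train_col: np.ndarray, prod_col: np.ndarray, alpha: float = 0.01) -> bool:
        """Return True if the production distribution differs significantly
        from the training distribution for this feature."""
        statistic, p_value = ks_2samp(train_col, prod_col)
        return p_value < alpha

    # Example: training data from one distribution, production data that has drifted.
    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model saw in training
    prod_feature = rng.normal(loc=0.8, scale=1.3, size=5_000)   # what it now encounters in deployment

    if detect_shift(train_feature, prod_feature):
        print("Distributional shift detected: retest or retrain before trusting outputs.")
    else:
        print("No significant shift detected for this feature.")

In practice, a check like this would run per feature (or on model outputs) as part of ongoing monitoring, so that the kind of train-versus-production mismatch described above is caught before accuracy degrades silently.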