OVERVIEW
Artificial intelligence (AI) systems, enabled by advances in sensor and control technologies, data science, and machine learning, promise to deliver new and exciting applications across a broad range of industries. However, a fundamental trust in their application and execution must be established for them to succeed. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an AI system, we need to know which factors affect system behavior, how those factors can be assessed and effectively applied for a given mission, and the risks assumed in trusting it.
This 8-hour recorded course aims to provide a foundation for building trust in artificial intelligence systems. It defines a framework for evaluating trust from three perspectives: data, AI algorithms, and cybersecurity. The course also reviews the state of the art in research, methods, and technologies for achieving trust in AI, along with current applications.
- Establishing Trust in Data
- Understanding Data Management
- Understanding AI Interpretability and Explainability
- Understanding Adversarial Robustness, including the intersection of AI and cybersecurity
- Understanding Monitoring & Control
- AIAA Member Price: $595 USD
- Non-Member Price: $795 USD
- AIAA Student Member Price: $395 USD
OUTLINE
- Introduction
- Motivation for establishing trust in AI systems
- Defining Trust
- Establishing a Trusted AI Framework
- Data
- Overview of Data Management
- Defining Data Governance
- Ensuring Fairness in AI Systems
- Assessing bias
- Techniques for removing bias
- Metrics for fairness
- Data poisoning
- Introduction to poisoning attacks
- Data poisoning defenses
- Building trust through reproducibility
- Handling domain shift
- Treating models as data
- Tools and best practices
- Adversarial Robustness
- Intersection of AI and cybersecurity
- Adversarial attacks
- Decision boundary attacks
- Adversarial patch attacks
- Defenses against adversarial attacks
- Adversarial training
- Exact methods
- Lower bound estimation
- Randomized smoothing
- Open areas of research
- Interpretability & Explainability
- Need for interpretability
- Tools and techniques
- Monitoring & Control
- Model acceptance testing
- Covariate, target, and domain shifts
- Mechanisms for control of AI systems
- Dealing with confidence and uncertainty
- Conclusion
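As a small taste of the "Monitoring & Control" topics in the outline above (this sketch is illustrative only and not course material), covariate shift between training data and deployment data can be flagged with a two-sample Kolmogorov-Smirnov statistic. The function name, data, and threshold below are assumptions chosen for the example:

```python
# Illustrative sketch: flag covariate shift by comparing the empirical
# distributions of a feature at training time vs. deployment time.
# Uses only the standard library; real systems would use a stats package.
import random

def ks_statistic(sample_a, sample_b):
    """Largest gap between the two samples' empirical CDFs (0 = identical)."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a + b))
    i = j = 0
    d = 0.0
    for x in points:
        while i < len(a) and a[i] <= x:
            i += 1
        while j < len(b) and b[j] <= x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(1000)]    # training inputs
same = [random.gauss(0.0, 1.0) for _ in range(1000)]     # no drift
shifted = [random.gauss(1.5, 1.0) for _ in range(1000)]  # simulated drift

print(ks_statistic(train, same))     # small gap: distributions agree
print(ks_statistic(train, shifted))  # large gap: covariate shift flagged
```

In practice a monitoring system would compute such a statistic per feature on a rolling window of deployment inputs and alert (or route to a fallback model) when it exceeds a calibrated threshold.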
Mr. Andrew Brethorst is the Associate Department Director for the Data Science and AI Department at The Aerospace Corporation. He completed his undergraduate degree in cybernetics at UCLA and his master's degree in computer science, with a concentration in machine learning, at UCI. Much of his work involves applying machine learning techniques to image exploitation, telemetry anomaly detection, and intelligent artificial agents using reinforcement learning, as well as collaborative projects within the research labs.
Classroom hours / CEUs: 8 classroom hours / 0.8 CEU/PDH