diff --git a/SCORES-AI-Alignment.md b/SCORES-AI-Alignment.md
new file mode 100644
index 0000000..303333f
--- /dev/null
+++ b/SCORES-AI-Alignment.md
@@ -0,0 +1,10 @@
+# AI/ML Alignment in SCORES Labs Present and Future
+### STATUS: DRAFT
+## Basic Principles for Protection of Humanity and The Environment
+- Technology we build and utilize needs to be understood in terms of its impact
+- Autonomous functions need controls and failsafe mechanisms
+- Training data ultimately belongs to the creators of that data, regardless of whether it was intentionally or consciously generated (e.g. physical activity data)
+
+## Best Practices
+- Product goals involving autonomous functions, especially those leveraging neural networks and "statistically trained" engines, should be reviewed for alignment using well-planned Quality Assurance.
+- AI systems must be monitored and interruptible.