Alan, thanks for reading my encyclopedic post!
So much to unpack and I confess to still having so much to learn.
Despite the length I feel like I just scratched the surface.
I’d be delighted to participate in a series of regular conversations to dive deeper into the topics covered.
I too have an interest in exploring what training really entails. As educators, we ought to have a vested interest in this area. I’m keen to engage, share, and question together.
You ask “is it possible to break down everything you so carefully researched into some smaller questions we can chew on?” I’ll try. Here are some of the questions I asked myself in exploring this work.
What is the role of “open” in AI?
What underlying source data is being used by AI as the basis for responding to human queries and prompts?
How much source data is being used? To what extent does more data lead to higher quality outputs?
Is the underlying source data being used as is or cleaned? Is the underlying source data free of bias and error?
Is it legal to train AI on data scraped from the web?
Is it morally and ethically fair to train AI on the work of others?
What open licenses are used in AI?
Are open source software licenses and Creative Commons licenses fit for purpose?
Do open licenses need to go beyond IP & copyright?
Do open licenses need to address moral and ethical downstream issues?
AI Tech Stack
What are the technology layers that make up AI?
How are AI technologies associated with each layer made available for use?
What open licenses are used at each layer of the AI tech stack?
What are AI models and why are they so important?
What are the different types of AI models? What is a foundation model?
Why are models openly licensed? How are models openly licensed?
How are models trained?
What is the potential for uniquely custom models I train myself?
What are machine learning, deep learning, and neural networks in AI?
What are the learning theories, models and practices being used to train AI?
How does AI use reinforcement learning, supervised learning and self-supervised learning?
How do we ensure AI does not endanger safety and security?
Who is liable if AI generated output causes human harm?
How do we embed ethics into the AI developer and user community?
How do we ensure AI generates public good?
How can we avoid the appropriation of end user data for massive corporate gain?
What are the aspects of AI that could go wrong and how severe are the ramifications?
Should AI be a licensed, regulated industry?
How can regulation be done in a way that is internationally compatible?
What are the criteria that trigger regulation?
To what extent can the AI community self-govern?
Geez, I should stop here. But as you can see, there are lots of questions. I should also note that I have by no means answered all of them. I welcome hearing what other questions you or others have.
Thanks for providing this forum as a space for discussion.