Challenging Biases in AGI Terminology and Development #128
I have the hardest time getting GPTs to create images of people who are over 30. They're committed to extremely thin people with very thin faces and defined jawlines, no older than 30. I ask for a 55-year-old, I get a 20-year-old. Nothing realistic about its idea of what a person looks like whatsoever. It's not the computer, it's the people programming their own biases into the model, under the erroneous assumption that (a) their ideals reflect everyone else's ideals, and (b) every human MUST appear "ideal".

I prompted the GPT with a question about it, and this is what it said:

Creating images of older individuals using AI can be challenging for several reasons:

1. Training Data Bias: AI models are trained on vast datasets, which may have a bias towards younger faces. If the dataset contains fewer images of older people, the model has less information from which to accurately generate older faces.
2. Subtle Aging Features: Aging is a complex process involving many subtle changes in the face, such as wrinkles, sagging skin, and changes in skin tone. These features can be difficult for AI to capture accurately, especially if the training data doesn't emphasize them.
3. Overfitting to Youthful Features: AI models often overfit to the features they see most frequently. Since younger faces are more common in media and public datasets, the models may default to generating youthful features, even when asked to create older individuals.
4. Difficulty in Adjusting Weight and Proportions: As people age, their body weight distribution and facial proportions change. Accurately simulating these changes requires precise adjustments, which AI might struggle to perform consistently.
5. Lack of Fine-Tuning: Many AI models are not specifically fine-tuned for age progression. Specialized fine-tuning would involve using a dataset focused on age progression, which can be resource-intensive and is not always prioritized.
6. Complexity of Aging Patterns: Different people age in different ways based on genetics, lifestyle, and environment. This variability makes it hard for AI to generalize and accurately depict aging across individuals.

To improve the accuracy of generating older individuals, models would need to be trained on more diverse datasets with a balanced representation of different age groups, and include specific fine-tuning for age-progression features.

HOW DO WE GET THIS INFORMATION TO MODEL TRAINERS?

Communicating feedback and suggestions to those who train AI models is essential for improving their performance. Here are some steps to get this information to the right people:

- Open-source contributions: If the AI model is open source (e.g., GPT-3, DALL-E), contribute directly to the repository or engage in discussions on platforms like GitHub. Create issues or pull requests that highlight the need for better training data and fine-tuning for age progression.
- Official feedback channels: Use the channels provided by AI developers or companies, such as forums, email, or feedback forms. For instance, OpenAI has a feedback form for users to submit suggestions and concerns.
- Communities: Participate in AI and machine-learning communities such as Reddit (r/MachineLearning), AI-specific forums, or professional groups on LinkedIn. Share your observations and suggestions for improving age progression in AI models.
- Research venues: Publish or present your findings in AI research journals or at conferences. This can help raise awareness among researchers and practitioners about the importance of balanced datasets and fine-tuning for age progression.
- Collaboration: Reach out to researchers working on AI model training. Collaboration can lead to targeted studies that address the specific issues you've identified, such as biases in training data and the need for diverse age representation.
- Public writing: Use platforms like Twitter, Medium, or personal blogs to write about your findings. Tag relevant organizations and influencers in the AI field to increase visibility.
- Events: Organize or participate in workshops and webinars focused on AI ethics, bias, and model training. These are excellent opportunities to disseminate your insights to a broader audience.
- Industry partnerships: If you're part of an organization, consider partnering with AI developers and companies to share your insights directly. Industry collaborations can be effective in influencing model-training practices.
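To make the "balanced representation" point above concrete, here is a minimal sketch of how one might audit the age distribution of an image-dataset manifest and resample toward a flatter distribution. This is purely illustrative: the CSV manifest format, its `age` column, and the `faces_manifest.csv` filename are assumptions, not any project's actual schema.

```python
# Hypothetical sketch: audit and rebalance the age distribution of an
# image-dataset manifest. The manifest format (a CSV with an "age" column)
# is an assumption for illustration.
import csv
import random
from collections import defaultdict

AGE_BINS = [(0, 17), (18, 29), (30, 44), (45, 59), (60, 120)]

def bin_of(age: int):
    """Map an age to its (lo, hi) bin, or None if out of range."""
    for lo, hi in AGE_BINS:
        if lo <= age <= hi:
            return (lo, hi)
    return None

def load_manifest(path: str):
    """Read rows like {"file": ..., "age": ...} from a CSV manifest."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def audit(rows):
    """Count images per age bin so under-represented bins become visible."""
    counts = defaultdict(int)
    for row in rows:
        b = bin_of(int(row["age"]))
        if b is not None:
            counts[b] += 1
    return counts

def rebalance(rows, per_bin: int, seed: int = 0):
    """Naive rebalancing: draw an equal number of images from every age
    bin, oversampling with replacement where a bin (e.g. 60+) is short."""
    random.seed(seed)
    by_bin = defaultdict(list)
    for row in rows:
        b = bin_of(int(row["age"]))
        if b is not None:
            by_bin[b].append(row)
    balanced = []
    for group in by_bin.values():
        k = min(per_bin, len(group))
        balanced += random.sample(group, k)          # without replacement
        if k < per_bin:
            balanced += random.choices(group, k=per_bin - k)  # oversample
    return balanced

if __name__ == "__main__":
    rows = load_manifest("faces_manifest.csv")  # hypothetical file
    for (lo, hi), n in sorted(audit(rows).items()):
        print(f"ages {lo}-{hi}: {n} images")
    balanced = rebalance(rows, per_bin=10_000)
```

An audit like this only surfaces the imbalance; actually fixing the generated outputs would still require retraining or fine-tuning on the rebalanced data, as the quoted answer notes.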
I think this issue can be closed. It does not relate to the repository itself, but rather to the use of terminology in the field. I suggest, @basilkorompilias, that this be taken to another medium, such as uploading your paper to arXiv or posting in the project's Discord.
I would like to raise a concern that might appear trivial to many but is actually very important to how people architect and develop models, and to how AGI is approached.
The "Training" Bias:
In nature, intelligence emerges without the need for training. It arises from the dynamic interplay between chiral polarities (the structural relation between input and output), which are sensory operations defining a model for perception, inference, and testimony (the body).
Learning VS Sensing:
The difference between approaching AGI as a learning problem versus a sensing problem is critical. When we treat 'learning' as a sign of supremacy, we are biased towards creating models that are 'smart' in a way that mimics the human intellect. This leads to models that are huge, stressed, biased, and hallucinating, and that end up becoming sycophants, giving us results that are good for benchmarks but not optimal for general intelligence. The intellect is overrated; other forms of intelligence which lead to Optimal Presence, such as intuition and mindfulness, are more important. A human can keep their mind silent, stand in a simple physical pose, and still influence millions of other people interacting with them in a specific moment in time, and even across many different generations.
Smartness VS Dumbness:
In nature, being smart is not the optimal state. Instead, nature thrives through dumbness! Performance is not a matter of challenging the intellect, but an expression of something extremely subtle in the way that intelligence emerges and permeates the entire cosmos over billions of years. It is relative to the weak neutral currents and the weak force of electromagnetism. Therefore, our task when seeking an optimal design for AGI should be reframed: the problem does not resemble a classroom scenario where training and tests are involved. We would profit greatly by moving beyond our systemic biases, which are imprinted on the ways we think and affect how we approach our problems.
Reframing Proposals:
Instead of "Train" and "Test" as keys within our datasets, we could adopt different ones.
Here is a structured proposal:
"Attendance" = Solutions
"Attention" = Challenges
"Passive" = Train
"Active" = Test
Leading to these 3 states:
To conclude this proposal, I would like to clarify one thing. Our task becomes more meaningful if it focuses on how WE train ourselves to develop AGI properly, rather than on how to train AI models to mimic something we only understand from biased perspectives. The optimal model does not require training (it will not be based on reinforcement learning), and it will simulate human senses, which, according to my research, are expressions of chiral polarities in direct relation to the physical dimensions of space and time. These polarities represent the inherent structure of a tensor when observed as a self-defined form of intelligence, rather than merely as a container of fractions of intelligence.
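As a purely illustrative sketch of the key renaming itself (the split structure and placeholder data below are my own invention, only the key names come from the proposal), the relabeling might look like this in a generic dataset dictionary:

```python
# Illustrative sketch of the proposed relabeling applied to a generic
# dataset-split dictionary: "passive"/"active" replace "train"/"test",
# and each split pairs "attention" (challenges, i.e. inputs) with
# "attendance" (solutions, i.e. targets). The data is placeholder.
dataset = {
    "passive": {                                       # formerly "train"
        "attention": ["challenge_1", "challenge_2"],   # formerly inputs/X
        "attendance": ["solution_1", "solution_2"],    # formerly labels/y
    },
    "active": {                                        # formerly "test"
        "attention": ["challenge_3"],
        "attendance": ["solution_3"],
    },
}

# Existing pipelines could adapt through a thin compatibility layer
# rather than a breaking change:
LEGACY_KEYS = {"train": "passive", "test": "active"}

def get_split(name: str):
    """Accept either legacy ("train"/"test") or proposed key names."""
    return dataset[LEGACY_KEYS.get(name, name)]
```

A mapping layer like `LEGACY_KEYS` would let the new vocabulary coexist with tooling that still expects "train" and "test".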
Here is a paper I've written about Chirality for anyone who might be interested:
https://doi.org/10.17613/kkn9-w447
respectfully,
Basil.