Welcome to our repository dedicated to collecting research and papers on cultural alignment in Language Models. This compilation serves as a resource for understanding and supporting the development of models that respect and integrate diverse cultural norms and values. (culturalalignment.ai)
This repository aims to:
- Serve as a centralized resource for researchers, students, and LLM enthusiasts.
- Enhance understanding of how LLMs align with various human values and cultural contexts.
- Stimulate discussion and promote further research in this critical area of LLM development.
Here, you'll find a curated list of academic papers, articles, and publications that explore the intersection of AI, Language Models, and cultural value alignment.
- (2-2024) Investigating Cultural Alignment of Large Language Models
- (2-2024) CIDAR: Culturally Relevant Instruction Dataset For Arabic
- (11-2023) CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models
- (8-2023) Group Preference Optimization: Few-Shot Alignment of Large Language Models
- (8-2023) Cultural Alignment in Large Language Models: An Explanatory Analysis Based on Hofstede's Cultural Dimensions
- (8-2023) [Unmasking Nationality Bias: A Study of Human Perception of Nationalities in AI-Generated Articles](https://arxiv.org/abs/2308.04346)
- (5-2023) Training Socially Aligned Language Models on Simulated Social Interactions
- (5-2023) Having Beer after Prayer? Measuring Cultural Bias in Large Language Models
- (4-2023) Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models
- (4-2023) In Conversation with Artificial Intelligence: Aligning Language Models with Human Values
- (3-2023) Whose Opinions Do Language Models Reflect?
- (3-2023) Assessing Cross-Cultural Alignment between ChatGPT and Human Societies: An Empirical Study
- (3-2023) Probing Pre-Trained Language Models for Cross-Cultural Differences in Values
- (2-2023) Nationality Bias in Text Generation
- Cultural Incongruencies in Artificial Intelligence
- The Myth of Culturally Agnostic AI Models
- French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English
- Artificial Intelligence, Values, and Alignment
- UNQOVERing Stereotypical Biases via Underspecified Questions
Find information on relevant conferences and workshops focusing on cultural alignment in AI:
- Cultures in AI/AI in Culture - December, NeurIPS 2022.
- Socially Responsible Language Modelling Research - December, NeurIPS 2023.
- Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP) - May, Association for Computational Linguistics 2023.
We welcome contributions! You can help by:
- Adding New Resources: Share new findings or resources.
- Updating Existing Entries: Ensure information is up-to-date and accurate.
- Enhancing Organization: Offer suggestions for a more user-friendly experience.
To contribute:
- Fork the repository.
- Make your changes.
- Submit a pull request with a detailed description of your changes.
Have questions or suggestions? Feel free to contact us.