Extended Intelligences
This course dives deep into the realities of AI, questioning its true nature, its environmental and societal impact, and its role in shaping our future.
AI (Artificial Intelligence): Systems that perform human-like tasks by learning from data.
Data: Not just raw information but the "tracks" of human or natural activities, contextualized through analysis.
Machine Learning (ML): A subset of AI that enables systems to learn patterns from data.
Deep Learning: A further subset of ML focusing on neural networks to process complex information.
Large Language Models (LLMs): Advanced AI systems trained on massive datasets to generate human-like text.
Design fiction: Imagining speculative futures through prototypes to explore consequences and possibilities.
Solar-punk: A literary and artistic movement that envisions a sustainable future interconnected with nature and community.
Google Colaboratory: A web-based tool to run Python code with pre-installed machine learning libraries.
Neural Networks: Layered configurations of algorithms, loosely modeled on the brain, that learn to recognize patterns in data.
Dataset: A structured collection of data used to train AI models.
API (Application Programming Interface): A set of rules allowing different software programs to communicate.
Latent Space: An abstract multidimensional space representing internal structures of observed data.
The first week provided a rich foundation for understanding AI beyond its surface functionalities, encouraging reflection on its material, ethical, and conceptual dimensions.
One of the first realizations was how misunderstood AI is. We learned that AI isn’t truly artificial or intelligent.
Not intelligent -> ChatGPT functions as a "calculator" for language, predicting text rather than understanding it; it is not a sentient entity. This understanding demystified AI, making it clear that its capabilities are limited to specific tasks.
Not artificial -> It relies on vast infrastructures of data centers that are based on:
Rare earths - Mined minerals used in computing devices that later become e-waste, affecting marginalized communities.
Water - Essential for cooling data centers and in the production of the devices (~0.5 liters of water per ChatGPT query).
Labor - From data-center maintenance to hazardous mining and e-waste work, often in marginalized communities.
Energy - Data centers consume ~2% of global electricity.
Heat - Generated during computation, requiring significant cooling resources.
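The figures above invite a quick back-of-envelope calculation. The sketch below uses only the ~0.5 liters-per-query estimate from the list (a rough, contested figure) to show how individual usage adds up; the query counts are invented for illustration.

```python
LITERS_PER_QUERY = 0.5  # approximate water cost per ChatGPT query, from the text

def water_for_queries(num_queries: int) -> float:
    """Estimated liters of cooling/production water for a number of queries."""
    return num_queries * LITERS_PER_QUERY

# A hypothetical user making 20 queries a day for a year:
yearly_liters = water_for_queries(20 * 365)
print(f"{yearly_liters:.0f} liters per year")  # 3650 liters
```

Crude as it is, the estimate makes the "hidden materiality" of a chat interface concrete: a daily habit consumes thousands of liters of water a year.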
These realities made me question how little we consider the environmental impact of our digital choices. Just as eco-conscious movements encourage responsibility in food consumption, shouldn’t we extend similar awareness to technology use?
One metaphor that stuck with me was how AI acts as a "mirror," reflecting human biases, values, and flaws. While we use AI to solve problems, it often perpetuates the inequalities embedded in its datasets. This forces us to ask: are we using AI responsibly, or are we amplifying societal issues like discrimination and disinformation?
AI influences not just our environment but also our perception of time. Its rapid development creates a sense of urgency and progress, yet it often locks us into the past. Historical data feeds these systems, embedding outdated views into new technologies. This means that while AI speeds up processes, it risks keeping us stuck in a "frozen past," disconnected from current and future challenges like climate change.
I found this concept of time fascinating. The metaphor of the torus-shaped perception of time, where past, present, and future are interconnected, challenged how I think about progress. Instead of rushing forward blindly, perhaps we need to pause, reflect on our origins, and learn from them to create more sustainable futures.
This week took a practical turn as we transitioned from theoretical discussions to applying AI tools to develop a project with a tangible outcome. We explored the potential of datasets and AI models to address future needs in the MDEF community, particularly within the speculative solar-punk scenario of 2040.
The initial hands-on exercises were eye-opening. Working with Google Colaboratory, APIs, and datasets gave me a sense of the intricacies involved in building AI models. In one example, we worked with a dataset of dreams sourced from Hugging Face, using Google Colaboratory to analyze the data. Through an API from Replicate (powered by LLaMA), we generated textual prompts from the dream data and transformed them into visual outputs.
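The prompt-building step of that pipeline can be sketched as below. The prompt template and sample text are hypothetical stand-ins; the actual notebook loaded the dreams dataset from Hugging Face and sent the resulting prompts to a model via the Replicate API.

```python
def dream_to_prompt(dream_text: str, style: str = "solar-punk illustration") -> str:
    """Wrap a raw dream description into an image-generation prompt."""
    cleaned = " ".join(dream_text.split())  # collapse stray whitespace/newlines
    return f"{style} of the following dream: {cleaned}"

sample = "I was walking   through a forest\nof glass trees."
print(dream_to_prompt(sample))

# In the notebook, a prompt like this would then be passed to a hosted model,
# e.g. with the Replicate client:
#   import replicate
#   output = replicate.run("<model-identifier>", input={"prompt": prompt})
```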
Google Colaboratory notebook of the example
One of the most impactful realizations was the sheer computational power and resources required to train models. I was struck that ChatGPT's training process, which took about 14 days, was beyond the capacity of even Europe's most powerful computer, MareNostrum, located in Barcelona. This underscored the environmental costs of AI, reminding me of its hidden materiality.
The collaborative project
Our initial concept revolved around creating a system to help students arrive on time for class by analyzing and optimizing their schedules. This later evolved into the idea of optimizing lesson breaks based on stress or hunger levels, but we struggled to find suitable datasets to support this. One dataset we encountered measured anxiety levels, but its implementation was too trivial to justify further development.
Ultimately, we pivoted to a more practical and impactful project: building a searchable interface for the MDEF community to access additional resources such as books, papers, and lectures. This collective repository would allow students to ask specific questions and receive curated answers based on the resources available.
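At its simplest, the searchable-repository idea can be sketched as keyword matching over a list of resource records. The titles and descriptions below are invented examples, and the real project used an LLM to produce curated answers rather than this bare ranking, but the retrieval step works on the same principle.

```python
# Hypothetical resource records standing in for the MDEF repository.
RESOURCES = [
    {"title": "Atlas of AI", "kind": "book",
     "text": "materiality labor data extraction planetary costs of AI"},
    {"title": "Design fiction lecture", "kind": "lecture",
     "text": "speculative futures prototypes solar-punk scenarios"},
]

def search(query: str, resources=RESOURCES):
    """Rank resources by how many query words appear in their title or text."""
    words = set(query.lower().split())
    scored = []
    for r in resources:
        haystack = set(f"{r['title']} {r['text']}".lower().split())
        score = len(words & haystack)
        if score:
            scored.append((score, r["title"]))
    return [title for score, title in sorted(scored, reverse=True)]

print(search("solar-punk futures"))  # ['Design fiction lecture']
```

In the actual project, the matched resources would be passed as context to an LLM, which generates the curated answer for the student's question.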
The presentation below outlines our project's functionality, ethical considerations, dataset creation process, and the learnings we gained along the way.
Google Colaboratory notebook of the final project
Working on this project was a highly collaborative and enriching experience. The process exposed us to new tools like Google Colaboratory and APIs, deepening our understanding of how LLMs can power interactive systems. A key takeaway was learning to code for AI, using AI itself as a resource, a challenging but rewarding endeavor that helped us overcome the fear of programming.
This course has profoundly changed how I view artificial intelligence. Initially, I saw AI as something abstract and autonomous, but now I understand it as a tool deeply rooted in physical infrastructures and human labor. Learning about the material and ethical impacts like the energy consumption of data centers was eye-opening. AI isn’t magical; it comes with environmental and social costs that we often overlook. This realization makes me more aware of how I use technology and more cautious about the impact of digital tools in my daily life.
One of the biggest takeaways was discovering the potential of creating customized AI models using datasets and APIs. This hands-on approach allowed me to see AI not as a universal solution but as a tool that can be tailored to address specific challenges with more precision and less resource consumption. For example, tools like Google Colaboratory have shown me how I can apply AI creatively to projects like interventions, data visualization, or even solving challenges within my design space.
Another reflection centers on AI’s role as a societal mirror. It reflects the biases and inequalities in the data it’s trained on, amplifying problems rather than fixing them. This made me question the hype around AI as a solution to everything and realize how disconnected its acceleration can be from solving urgent real-world problems, like the climate crisis. It’s a reminder that technology isn’t inherently good or bad, it’s about how we choose to use it.
Moving forward, I see AI as a valuable yet imperfect tool. I’ll continue using it, but with a more critical and informed perspective, focusing on projects where its benefits outweigh its costs. Instead of relying on general-purpose systems like ChatGPT, I’m excited to explore smaller, task-specific AI models that align with my work in design and sustainability. This course taught me to balance curiosity with responsibility, embracing AI’s possibilities while remaining grounded in ethical and intentional use.