Extended Intelligences II
The first day introduced us to the role of AI in design, exploring how LLMs and multimodal AI tools influence creative processes. We began with portfolio showcases from our teachers Pietro and Chris, followed by a hands-on session using the DOTTOD AI camera, which challenged our expectations of AI-generated visuals.
One of the most interesting projects for me was the one in which iconic furniture pieces were described in text, processed through an AI model, and then physically recreated based on the AI's interpretation. This project resonated with me because I've struggled with AI-generated 3D models not aligning with my descriptions. Seeing this approach reframed my perspective: rather than viewing AI as an unpredictable tool to be tamed, it can serve as a co-creator, producing unexpected yet unique outcomes.
The image depicts a playground slide set in a park-like area. It features a metal slide that curves gently downward. A person is lying belly-first on a red and orange buoy-like mat, facing downward, ready to slide down. The surrounding environment includes several trees with bare branches, indicating it may be late in the year or early spring. The ground is covered with a mix of dirt and scattered leaves, suggesting the presence of the trees nearby. In the background, there are some partially visible modern buildings, hinting at an urban setting. The slide itself is positioned at a moderate incline, and its surface shows some scattered leaves, suggesting it hasn’t been cleaned recently. Overall, the scene captures a moment of playful fun in a tranquil outdoor space.
This exercise highlighted AI's biases and constraints, particularly in how it defaults to certain aesthetic stereotypes. While AI can generate convincing visuals, it still lacks deep creative understanding, reinforcing the importance of human intervention and critical awareness in AI-driven design.
The second day delved into the fundamentals of neural networks and diffusion models, providing a structural understanding of LLMs and how they process data. We explored concepts such as input layers, hidden layers, and output layers, using a housing price prediction model as a tangible example. This breakdown helped clarify how AI calculates probabilities to generate outputs based on various weighted inputs.
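To make the weighted-input idea concrete, here is a minimal, self-contained sketch of a single hidden neuron in a housing-price model. It is plain C++, not taken from the course materials, and all numbers are made up for illustration: the neuron computes a weighted sum of its inputs plus a bias, applies an activation, and the output layer scales the result into a price estimate.

```cpp
// Illustrative sketch of one neuron's computation: weighted sum + bias,
// then an activation function. Values are invented for demonstration.
#include <cstdio>
#include <algorithm>

int main() {
  // Inputs for one house: size (m^2), number of rooms, distance to center (km)
  double x[3] = {80.0, 3.0, 2.5};

  // Weights express how strongly each input influences this neuron
  double w[3] = {0.04, 0.50, -0.30};
  double bias = 1.0;

  double sum = bias;
  for (int i = 0; i < 3; ++i) sum += w[i] * x[i];

  // ReLU activation: negative evidence is clipped to zero
  double hidden = std::max(0.0, sum);

  // "Output layer": scale the hidden activation into a price estimate
  double price = 100000.0 * hidden;
  std::printf("Estimated price: %.0f\n", price);
  return 0;
}
```

Stacking many such neurons into hidden layers, and learning the weights from data, is what lets the network turn raw inputs into a probability or prediction.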
For the final day of the course, we experimented with Arduino and embedded AI to develop our own AI-powered projects. Our group focused on using the ESP32-CAM to create a wearable navigation system for green spaces. The idea was to develop a garment that could detect plants or natural environments and provide haptic feedback through vibration motors to guide the wearer toward green areas. Motors on the left, right, and center would activate to indicate the optimal path toward nature, effectively functioning as an AI-powered green-space GPS.
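A hypothetical sketch of that intended behaviour is below. The full three-motor version was never built (for the hardware reasons described later), and the pin numbers and the Direction type are illustrative assumptions, not the code we actually ran.

```cpp
// Hypothetical sketch of the intended three-motor guidance logic.
// Assumed GPIO numbers; the real prototype ended up using a single motor.
enum Direction { LEFT, CENTER, RIGHT, NONE };

const int MOTOR_LEFT   = 12;
const int MOTOR_CENTER = 13;
const int MOTOR_RIGHT  = 14;

void setupMotors() {
  pinMode(MOTOR_LEFT, OUTPUT);
  pinMode(MOTOR_CENTER, OUTPUT);
  pinMode(MOTOR_RIGHT, OUTPUT);
}

// Buzz only the motor pointing toward the detected greenery.
void guideToward(Direction dir) {
  digitalWrite(MOTOR_LEFT,   dir == LEFT   ? HIGH : LOW);
  digitalWrite(MOTOR_CENTER, dir == CENTER ? HIGH : LOW);
  digitalWrite(MOTOR_RIGHT,  dir == RIGHT  ? HIGH : LOW);
}
```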
To get started, we installed the necessary libraries in Arduino IDE:
ChatGPT_Client for AI integration
Adafruit SSD1306, which drives the small OLED display used in the camera setup (not the camera itself)
ESP32 package, which we had already installed from previous work with Barduino
I focused primarily on modifying and debugging the Arduino code to integrate the camera and vibration motors. Initially, we tested the base code that was preloaded in the ESP32, which detected people and turned on the camera’s built-in LED when it identified a human presence. After ensuring the AI was working correctly, we modified the prompt in Config.cpp to detect eye contact instead. Finally, we adjusted it to recognize plants or anything related to nature and trigger the response accordingly.
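For illustration, the prompt change might look something like the sketch below; the variable name and exact wording are assumptions, since the real Config.cpp from the course code may be organized differently.

```cpp
// Config.cpp -- illustrative sketch only; identifiers and wording are assumed.
// The key idea: the system prompt sent with each captured frame is a plain
// string, so changing what the camera "looks for" is just a text edit.

// Earlier iterations (people, then eye contact):
// "Reply YES if a person is looking directly at the camera, otherwise reply NO."

// Final version for our green-space detector:
const char* VISION_PROMPT =
    "Reply YES if the image contains plants, trees, or any natural greenery, "
    "otherwise reply NO.";
```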
To transition from an LED-based output to vibration motors, we had to define new variables and modify the .ino file to activate the motors instead of the light. Since vibration motors can be driven like LEDs in Arduino (simple HIGH/LOW states), this part was straightforward. However, we encountered hardware limitations: when we tried to connect three vibration motors, the program failed, and with two motors the AI still ran but the motors didn't respond, likely due to insufficient current. The camera's reaction time was also poor, since image quality has to be balanced against processing speed.
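A minimal sketch of that LED-to-motor change is below, assuming the motor is wired to a free GPIO through a transistor driver; the pin number and function name are illustrative, not the exact code from our .ino file.

```cpp
// Illustrative sketch: the vibration motor is driven exactly like the
// built-in LED was, with a plain digital HIGH/LOW. Pin number is assumed.
const int MOTOR_PIN = 12;

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

// Called with the result parsed from the AI's reply.
void setFeedback(bool plantDetected) {
  // Vibrate when a plant is detected (our final prototype ended up with
  // the inverse logic, as noted below).
  digitalWrite(MOTOR_PIN, plantDetected ? HIGH : LOW);
}
```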
Due to these limitations, the final prototype only had one motor, which vibrated when no plant was detected and remained off when a plant was in view (essentially working in reverse of our initial concept). During the feedback session, we were advised to use a single motor with varying intensity levels to convey different responses instead of multiple motors. This approach would simplify the design while maintaining clear and intuitive feedback for the user.
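As a rough illustration of that suggestion, intensity control on the ESP32 could be done with PWM. The sketch below uses the ledcSetup/ledcAttachPin/ledcWrite API of the Arduino-ESP32 core (newer core versions replace these with ledcAttach); the pin, channel, and intensity levels are assumptions for illustration.

```cpp
// Hypothetical single-motor feedback using PWM to vary vibration intensity.
const int MOTOR_PIN   = 12;    // assumed GPIO for the vibration motor
const int PWM_CHANNEL = 0;
const int PWM_FREQ    = 5000;  // Hz
const int PWM_RES     = 8;     // 8-bit duty cycle: 0-255

void setup() {
  ledcSetup(PWM_CHANNEL, PWM_FREQ, PWM_RES);
  ledcAttachPin(MOTOR_PIN, PWM_CHANNEL);
}

// Map the AI's response to a vibration strength:
// strong buzz = keep going, gentle buzz = adjust course, off = arrived.
void setIntensity(int level) {        // level: 0 (off) to 3 (strong)
  const int duty[] = {0, 80, 160, 255};
  ledcWrite(PWM_CHANNEL, duty[constrain(level, 0, 3)]);
}
```

A single motor with graded intensity would also sidestep the current-supply problem we hit when driving several motors at once.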
Throughout these sessions, I reinforced my sense that AI doesn't just do what you tell it; it has its own strange logic, biases, and limitations, but that's also what makes it interesting. I was struck by how unpredictable AI can be: when we experimented with the AI camera, we expected specific results, but what we got was completely different. It made me think about control in design, about how much we actually control and how much we simply guide. This is something I've struggled with before, especially when trying to use AI for 3D modeling to create printable products for my studio. It never gives me exactly what I want, and I always end up frustrated.
The group project was another big learning moment. It was really nice to work with the ESP32 camera module and integrate AI into a real product to improve or solve something. We had a clear and "simple" idea, but the reality of making it work was more challenging than expected. It made me realize that working with AI is often more about problem-solving than about being creative, and I don't really like that idea.
For my own MDEF project, this course reinforced something I've been thinking about a lot regarding AI: it doesn't just replace human creativity but changes how we create. With my sweat experiments, I've been looking at the body as a material source, emphasizing what it can do on its own rather than relying on external intelligence. Sweat is such a raw, human material, and part of my exploration is about reclaiming it as something valuable, so I don't immediately see a strong role for AI in my project. However, after this course, I do feel more comfortable working with AI, even if I don't see an obvious application right now.
Later, we experimented with a tool that captures a "reality snapshot" and modifies it based on text prompts. We took four photos and attempted to transform them, but the AI's interpretations were often surprising and inaccurate. For instance, a photo of Andrea on a slide was meant to depict a joyful water park scene, yet the AI generated a melancholic and desolate atmosphere due to our vague prompt. Conversely, when we tried to be highly specific, such as turning ourselves into cyborgs, the AI struggled to understand the request, revealing its limitations in conceptual reasoning.
For the hands-on session, we experimented with a tool that allows for AI-driven project prototyping inspired by synthesizer modulation matrices. By inputting an image of my current project, I explored how AI could reinterpret and evolve it into a more advanced artistic concept. This process revealed how AI can be used not just as a creative tool, but as a collaborator that introduces unexpected elements into the design process. It reinforced the idea that AI-driven creativity is not about precision, but about exploring new possibilities beyond human intuition.
Here are the files for Barduino and the ESP32 Camera.
"You see two person in the picture, plase create a motorbike from this two person. The motorbike has to look like a real motorbike, the all meccanism and engeeniring part."
"You see in this picture two guys. Please link their arms, leg and body for create a motorbike. This motorbike has to be like a real mecchanism as a real motorbike work."
"Make a flesh motorbike that is parked next to the conventional bikes, realistic. Limbs have a purpose. Human evolved. Produced by harley davidson. Include body modifications and decorations. Transformer adjacent. With empathy"