Javier Serra - MDEF Portfolio

Extended Intelligences II


Day 1: AI in design and AI camera

The first day introduced us to the role of AI in design, exploring how LLMs and multimodal AI tools influence creative processes. We began with portfolio showcases from our teachers Pietro and Chris, followed by a hands-on session using the DOTTOD AI camera, which challenged our expectations of AI-generated visuals.

One of the most interesting projects for me was one in which iconic furniture pieces were described in text, processed through an AI model, and then physically recreated from the AI's interpretation. It resonated with me because I have struggled with AI-generated 3D models that do not match my descriptions. Seeing this approach reframed my perspective: rather than treating AI as an unpredictable tool, it can serve as a co-creator, producing unexpected yet unique outcomes.

The camera first produced this scene description:

"The image depicts a playground slide set in a park-like area. It features a metal slide that curves gently downward. A person is lying belly-first on a red and orange buoy-like mat, facing downward, ready to slide down. The surrounding environment includes several trees with bare branches, indicating it may be late in the year or early spring. The ground is covered with a mix of dirt and scattered leaves, suggesting the presence of the trees nearby. In the background, there are some partially visible modern buildings, hinting at an urban setting. The slide itself is positioned at a moderate incline, and its surface shows some scattered leaves, suggesting it hasn’t been cleaned recently. Overall, the scene captures a moment of playful fun in a tranquil outdoor space."

Our prompt, verbatim:

"create a little funny boy in a water slide having fun and beeing very joyful. The context is an acquatic part"

The full instruction sent to the image generator combined our prompt with the scene description:

"My prompt has full detail so no need to add more. DO NOT add any detail, just use it AS-IS. Use the following information as the base details to generate the image: create a little funny boy in a water slide having fun and beeing very joyful. The context is an acquatic part Scene description: The image depicts a playground slide set in a park-like area. It features a metal slide that curves gently downward. A person is lying belly-first on a red and orange buoy-like mat, facing downward, ready to slide down. The surrounding environment includes several trees with bare branches, indicating it may be late in the year or early spring. The ground is covered with a mix of dirt and scattered leaves, suggesting the presence of the trees nearby. In the background, there are some partially visible modern buildings, hinting at an urban setting. The slide itself is positioned at a moderate incline, and its surface shows some scattered leaves, suggesting it hasn’t been cleaned recently. Overall, the scene captures a moment of playful fun in a tranquil outdoor space."

Despite that instruction, the model expanded it into:

"Visualize a small, jovial boy in a water park, merriment radiating from his face as he navigates a water slide. Sketched around him is a setting resembling a city park, with a water slide instead of a regular one - its metallic sheen reflecting the surrounding merriment. A red and orange buoy-like mat tucked tightly under his belly accentuates his excitement. Arrayed around the slide, naked trees stand, their bare branches hinting at the change of seasons. The earth beneath is a rustic collage of soil intermittently strewn with parched leaves. In the distant backdrop, outlines of modern buildings float, rooting the location in an urban locale. The uncared-for slide, spotted with leaves, attests to its recent abandonment. The scene encapsulates a singular moment of childlike enjoyment amid a serene, urban outdoor scape."

This exercise highlighted AI’s biases and constraints, particularly in how it defaults to certain aesthetic stereotypes. While AI can generate convincing visuals, it still lacks deep creative understanding, reinforcing the importance of human intervention and critical awareness in AI-driven design.


Day 2: Architecture and experimentation

The second day delved into the fundamentals of neural networks and diffusion models, providing a structural understanding of LLMs and how they process data. We explored concepts such as input layers, hidden layers, and output layers, using a housing price prediction model as a tangible example. This breakdown helped clarify how AI calculates probabilities to generate outputs based on various weighted inputs.

The model's description of our scene:

"The image shows a person in a kitchen setting being wrapped in clear plastic wrap or film, with layers of paper towels underneath. The person seems to be involved in this process willingly, perhaps as a part of a prank or costume creation. A hand is assisting with the wrapping, suggesting collaborative involvement. In the background, there are kitchen items like jars, a plant, and a box."

Our prompt, verbatim:

"I want to transfor that scene in an artistic practice meant to collect my sweat. showcase me in a art gallery and being my sweat the pratogonist"

To transform this scene into an artistic practice focused on collecting and showcasing sweat in a gallery setting, consider the following steps:

  1. Conceptualization:

    • Define the artistic concept and intention. Perhaps the work could explore themes of human exertion, biology, or the intersection of the body and everyday materials.

  2. Materials:

    • Use clear plastic wrap or film to encapsulate the body, allowing visibility of the sweat against the skin.

    • Incorporate absorbent materials like paper towels or sponges underneath to enhance the collection of sweat.

    • Consider adding an inner layer that changes color upon absorbing moisture to visually emphasize the sweat collection.

    • Ensure the materials are skin-safe and non-toxic.

  3. Presentation:

    • In the gallery, the participant could be seated or standing within a designated space, capturing the audience's attention as a living sculpture.

    • Install mirrors or cameras to provide multiple angles and live-feed projections, enhancing the viewer's experience and interpretation.

    • Use lighting strategically to highlight the beads of sweat, creating a shimmering effect that draws attention to the act of perspiration.

  4. Audience Interaction:

    • Allow the audience to engage by observing the bodily process in real time, blurring the lines between viewer and viewed.

    • Provide information cards or a multimedia display that explains the significance of sweat, its cultural perceptions, or personal meaning.

  5. Documentation:

    • Capture high-resolution photographs or video footage of the performance for archival purposes and further analysis.

    • Consider making these visual recordings part of the exhibit, offering an extended life to the ephemeral nature of the live action.

  6. Safety and Comfort:

    • Ensure the participant has access to plenty of hydration and breaks, considering the impact of being wrapped and exposed to gallery lights.

    • Designate a facilitator or gallery staff member to monitor the participant’s well-being throughout the performance.

By focusing on the physical and metaphorical aspects of sweating, combined with the aesthetic of everyday materials, this project can provoke conversation and reflection on the human body's relationship with art and its environment.


Day 3: AI-embedded artifact

For the final day of the course, we experimented with Arduino and embedded AI to develop our own AI-powered projects. Our group focused on using the ESP32-CAM to create a wearable navigation system for green spaces. The idea was to develop a garment that could detect plants or natural environments and provide haptic feedback through vibration motors to guide the wearer toward green areas. The motors would activate in different directions: left, right, or center, to indicate the optimal path toward nature, effectively functioning as an AI-powered green space GPS.

Goal: turn on the central motor vibration if there is a plant or something related to nature (leaves, tree, grass) in the image, else turn either the right or left motor

To get started, we installed the necessary libraries in Arduino IDE:

  • ChatGPT_Client for the AI integration

  • Adafruit_SSD1306, an OLED display driver the camera example sketch depends on

  • The ESP32 board package, which we had already installed for previous work with the Barduino

I focused primarily on modifying and debugging the Arduino code to integrate the camera and vibration motors. Initially, we tested the base code that was preloaded in the ESP32, which detected people and turned on the camera’s built-in LED when it identified a human presence. After ensuring the AI was working correctly, we modified the prompt in Config.cpp to detect eye contact instead. Finally, we adjusted it to recognize plants or anything related to nature and trigger the response accordingly.

To transition from an LED-based output to vibration motors, we had to define new variables and modify the .ino file to activate the motors instead of the light. Since simple vibration motors can be driven like LEDs in Arduino (plain HIGH/LOW states), this part was straightforward. However, we ran into hardware limitations. With three vibration motors connected, the program failed to run at all; with two, the AI still functioned but the motors didn't respond, most likely because the board couldn't supply enough current. The camera's reaction time was also poor, since image quality has to be balanced against processing speed.

Due to these limitations, the final prototype only had one motor, which vibrated when no plant was detected and remained off when a plant was in view (essentially working in reverse of our initial concept). During the feedback session, we were advised to use a single motor with varying intensity levels to convey different responses instead of multiple motors. This approach would simplify the design while maintaining clear and intuitive feedback for the user.


Personal reflection

Throughout these sessions, I reinforced my idea that AI doesn't just do what you tell it; it has its own weird logic, biases, and limitations, and that is also what makes it interesting. I was struck by how unpredictable AI can be. When we experimented with the AI camera, we expected specific results, but what we got was completely different. It made me think about control in design: how much do we actually control, and how much do we just guide? This is something I've struggled with before, especially when applying AI to 3D modeling to create printable products for my studio. It never gives me exactly what I want, and I always end up frustrated.

The group project was another big learning moment. It was really nice to work with the ESP32 camera module and integrate AI into a real product to improve or solve something. We had a clear and "simple" idea, but making it work was more challenging than expected. It made me realize that working with AI is more about problem-solving than about being creative, and I don't really like that idea.

For my own MDEF project, this course reinforced something I've been thinking about a lot: AI doesn't just replace human creativity; it changes how we create. With my sweat experiments, I've been looking at the body as a material source, emphasizing what it can do on its own rather than relying on external intelligence. Sweat is such a raw, human material, and part of my exploration is about reclaiming it as something valuable, so I don't immediately see a strong role for AI in my project. However, after this course, I do feel more comfortable working with AI, even if I don't see an obvious application right now.

Later, we experimented with the DOTTOD AI camera, a tool that captures a "reality snapshot" and modifies it based on text prompts. We took four photos and attempted to transform them, but the AI's interpretations were often surprising and inaccurate. For instance, a photo of Andrea on a slide was meant to depict a joyful water park scene, yet the AI generated a melancholic and desolate atmosphere because our prompt was vague. Conversely, when we tried to be highly specific, such as turning ourselves into cyborgs, the AI struggled to understand the request, revealing its limitations in conceptual reasoning.

For the hands-on session, we experimented with Modmatrix, a tool for AI-driven project prototyping inspired by synthesizer modulation matrices. By feeding in an image of my current project, I explored how AI could reinterpret and evolve it into a more advanced artistic concept. This process revealed how AI can be used not just as a creative tool, but as a collaborator that introduces unexpected elements into the design process. It reinforced the idea that AI-driven creativity is not about precision, but about exploring new possibilities beyond human intuition.

Here are the code examples for the Barduino and the ESP32 camera.

Gallery of Cybernetic Interpretations

Some of the prompts we tried, verbatim:

"You see two person in the picture, plase create a motorbike from this two person. The motorbike has to look like a real motorbike, the all meccanism and engeeniring part."

"You see in this picture two guys. Please link their arms, leg and body for create a motorbike. This motorbike has to be like a real mecchanism as a real motorbike work."

"Make a flesh motorbike that is parked next to the conventional bikes, realistic. Limbs have a purpose. Human evolved. Produced by harley davidson. Include body modifications and decorations. Transformer adjacent. With empathy"

Our code: ESP32CamAI_natureGPS.zip