For Day 6, I wanted to delve deeper into the idea of gestures and think about the kinds of gestures I could train a machine learning model to recognize. Knot tying gestures can be extracted from my Day 1 prototype, and knitting and weaving both have recognizable gestures as well. But since Day 5 was about ciphers, for Day 6 I wanted to create an object that somehow contains information in a unique way - not through weaving, knitting, or knot tying, but through a technique I have not explored before that can nevertheless be adapted for cloth.
I was fascinated by origami folding, and how the act of folding - and the sequence of folds - can function as a data encryption tool. Moreover, origami is a technique that allows the user to create nearly any form imaginable. One could in fact fold a treasure box and keep valuables inside. The key for opening the treasure box would be the right sequence of folds that creates the box form in the first place.
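The fold-sequence-as-key idea can be sketched in a few lines of code. This is a minimal, illustrative sketch of my own - the fold names and key length are made-up assumptions, not anything from the actual box design - but it shows how order-sensitivity is what makes the sequence work like a key.

```python
# Illustrative sketch: the sequence of folds acts as the key.
# The fold labels below are hypothetical placeholders, not the real box's folds.
KEY_SEQUENCE = ["valley", "mountain", "valley", "valley", "mountain"]

def unlocks(attempted_folds, key=KEY_SEQUENCE):
    """The box 'opens' only if the attempted folds match the key exactly.
    Order matters: the same folds in a different sequence fail."""
    return attempted_folds == key

# Correct sequence opens the box; a reordered one does not.
print(unlocks(["valley", "mountain", "valley", "valley", "mountain"]))  # True
print(unlocks(["mountain", "valley", "valley", "valley", "mountain"]))  # False
```

Because a list comparison in Python checks both contents and order, knowing *which* folds were used is not enough - the maker's exact sequence is the secret.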
So I picked a very simple shape that requires modular units to be assembled (modular design allows for a collaborative making experience) to create a square box. Below is the video of me folding 6 units and then assembling them into the box. I then analyzed the common gestures used throughout the folding process and identified 3-4 distinct ones that could serve as markers to keep track of folding progress - i.e. the machine could recognize these gestures and replicate the progress as a simulation, which would become the digital 3D object.
Origami folding gesture analysis - video
Common folding gestures
4 fingers + index finger
2 fingers + thumb
3 fingers + thumb
4 fingers + thumb
5 fingers + thumb
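To make the marker idea above concrete, here is a small sketch of how recognized gesture labels could drive the digital simulation. Everything here is an assumption of mine - the label strings, and the guess of a fixed number of folds per unit - the only number taken from the post is the 6 modular units. A real version would sit downstream of an actual hand-tracking model.

```python
# Hedged sketch: a tracker that advances the simulated 3D fold state
# each time the model reports one of the gesture labels listed above.
# Label names and folds-per-unit are illustrative assumptions.

KNOWN_GESTURES = {
    "4_fingers_plus_index",
    "2_fingers_plus_thumb",
    "3_fingers_plus_thumb",
    "4_fingers_plus_thumb",
    "5_fingers_plus_thumb",
}

class FoldTracker:
    """Tracks folding progress across the box's 6 modular units."""

    def __init__(self, folds_per_unit=7, units=6):  # folds_per_unit is a guess
        self.total = folds_per_unit * units
        self.done = 0

    def observe(self, label):
        """Advance one fold step when a known gesture is recognized;
        ignore anything the model can't classify. Returns progress 0..1."""
        if label in KNOWN_GESTURES and self.done < self.total:
            self.done += 1
        return self.done / self.total

tracker = FoldTracker()
tracker.observe("2_fingers_plus_thumb")   # one fold registered
tracker.observe("unknown_gesture")        # ignored, progress unchanged
```

The progress fraction returned by `observe` is what a simulation could use to interpolate the digital 3D object from flat paper to finished unit.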