
2019.09.21 Machine Learning for Designers at Fresh Eyes Berlin

January 28, 2020


Hi, everyone! My name is Nono Martínez Alonso, and I've been doing some research on machine learning for designers. What I bring today is an overview, or an introduction, of what this shift in mindset and technologies means for designers, architects, or even artists and creators. If we have time, I will also give a few case studies or examples of projects that I've done personally, or of projects from other people, like Runway, for creative endeavors. Okay, so let's get started.
First of all, I would like to thank the team of the Fresh Eyes workshop. Matt Turlock and Kyle Steinfeld are here, and some other people are overseas: Adam Menges, Kat, and Samantha Walker were part of the SmartGeometry workshop in 2018, and they've been doing a lot of research around what it means to use machine learning tools for generative architectural design. That is not part of this talk, but it's within the same framework.

What we've seen over the last decades is a shift from doing things manually, like writing with ink, to doing them mechanically, and then, more recently, to doing them digitally, or even by tapping with your fingers on a phone. What we see now is doing those things with intelligence, and sometimes the interfaces fade away: there was something tangible, your notebook, that then became a mechanical typewriter, then a computer, and then Siri makes it fade away in some way.
Kevin Kelly is a futurist, and he wrote that anything to which we added electricity before, we're now trying to add intelligence to. I see some nodding heads here: any process we tried to automate with electricity, or with some sort of power, we're now trying to make more intelligent, maybe so it responds on its own or works in a more efficient way. And this transition is also happening in what we call the BIM world, the construction industry. We did things manually before, then we mechanically automated them with robots, and now we're trying to use robots that are a bit more intelligent, that are aware of their environment, or that have some way of understanding the real world.
But why do I think this shift is important in the design field? What I believe is that the interfaces we use really matter for the end goal we reach in a design project. We used to have a pencil to sketch on paper, and we went from drawing a line by hand to doing it automatically with a computer by clicking two points. There are some things involved in this process. One thing that is not in the scope of this presentation is that doing it manually actually gives more hints to a machine learning algorithm about what you're doing, because these four hand-drawn lines are all different from each other, while the clicked ones can be the same for thousands of people: if I give a thousand people the same two points to click, the lines are going to be exactly the same. But as we've seen, if you do it by hand, each person adds a different character to their drawing.
Today we're also defining buildings with data; this is a screenshot from Revit, for instance. Sometimes we don't even draw lines or geometry, we just move sliders and create parameters. We also use code as an interface: we write one line of code and we can generate some geometry.
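To give a minimal, illustrative sketch of that idea of code as an interface (not a slide from the talk, just an example in Python), a single line can generate the points of a circle:

```python
import math

# One line of "design": 36 points evenly spaced on a circle of radius 5
points = [(5 * math.cos(2 * math.pi * i / 36), 5 * math.sin(2 * math.pi * i / 36)) for i in range(36)]

print(points[:3])  # a few of the generated coordinates
```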
What we're seeing now is interfaces that might allow us to do things like asking a bot to help us color a drawing, or a house, or something else, and this presents a new paradigm. This is part of the work I did at the GSD, and the new paradigm reads that, by creating a dataset, we as designers can become programmers, and choosing the right set of images becomes part of the design process. Metaphorically, at least, because you are changing the part of the learning algorithm that determines how it behaves: you're tweaking a program to work differently by feeding it new images. Here, instead of writing code to change an algorithm, we're feeding it images, and we are adjusting the algorithm.
Again, the transition goes from manual, from analog, to digital, and what we see now is the automation era, where we go from code and data to also adding intelligence to those processes. Machines allow us to automate, or to skip, processes that we used to do ourselves, and maybe add more quality and more accuracy to the processes we're doing. One thing that I like to think is that robots or machines replace tasks, not jobs. What we see is that as technology advances, we keep moving the threshold of what we consider artificial intelligence: every time we reach a goal that we set years ago as artificial intelligence, we move the bar, and because we've conquered it we say, okay, that's just an algorithm; the new bar, what we haven't reached yet, is artificial intelligence. So we keep replacing small tasks, which lets us do our design work in a different way.
To summarize, I believe that automation and intelligence will let us skip tasks that are not necessarily design but more automated processes, iterate more and better, and offer new ways of interaction that, with real-time feedback, make the design process really different. One example: you can shorten a simulation iteration from 15 minutes to 1 second, and that allows you to iterate between many different design options in one minute instead of a couple of days or hours.
This is one example in which a robot is replacing a specific task: it's just moving timber studs, and it's okay at putting them in place, but there is also a human in the loop who needs to make sure each one is put in place. A quote from Ian Keough, from a conversation I had with him, is that some things are just work: when you already know what the design is and you just need to place a thousand beams, that's just work, and a computer can do it for you. There are no more decisions to take, everything is deterministic, so you can hand that off to an algorithm, and designers don't need to do that.
One thing I wanted to do as part of this workshop is to clarify a bit what we mean when we talk about these terms. Many of you might already be familiar with them, but there are a few concepts I want to clarify: computational design, generative design, artificial intelligence, machine learning, and artificial general intelligence. In computational design, we define a set of instructions, rules, and relationships that determine the steps to obtain a design, its geometry, and its information.

Generative design Kyle has explained with a small tweak for the workshop, but I generally define it as a process in which the designer defines the design parameters and the computational model that generates the geometry, and the machine generates design options, evaluates them against a series of goals, improves them according to the obtained results as it tries to optimize, and then ranks each of the design options according to how well they meet the goals that you, as a designer, establish in the process. So you're telling the machine what you want to get, and the best outcome you can get is the computer trying to optimize for that. This is a diagram of a generative design workflow where the computer generates, analyzes, ranks, and then evolves: it generates a design, sees how good it is, and then evolves it and tries to make it a bit better at every step of the loop.
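As a rough sketch of that generate, analyze, rank, and evolve loop (a conceptual illustration in Python, not code shown in the talk), with a made-up goal of pushing two design parameters toward a target value:

```python
import random

def generate(parent=None):
    """Generate a design option; here just two design parameters."""
    if parent is None:
        return [random.uniform(0, 10), random.uniform(0, 10)]
    # Evolve: mutate an existing option slightly
    return [p + random.uniform(-0.5, 0.5) for p in parent]

def evaluate(option):
    """Analyze: score the option against an assumed goal (both parameters near 7)."""
    return -sum((p - 7) ** 2 for p in option)

population = [generate() for _ in range(20)]
for step in range(50):
    ranked = sorted(population, key=evaluate, reverse=True)                 # rank
    best = ranked[:5]                                                       # keep the fittest
    population = best + [generate(random.choice(best)) for _ in range(15)]  # evolve

print("best option:", max(population, key=evaluate))
```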
Artificial intelligence is now a hype term, so we all know what it means, but I want to make a small distinction. In general, it's all the machines and algorithms that try to imitate, let's say, our cognitive functions, but there are some nuances. Traditional, or what we call classic, artificial intelligence methods were developed in the 70s, so many of the things we use today were developed 40 or 50 years ago, like classic search, genetic algorithms, constraint-satisfaction problems, and many others. Machine learning, which is what we're focusing on today, is algorithms capable of learning and improving from examples, without being explicitly programmed for a new set of data; it has also had a recent boom because of new neural network architectures and the power of GPUs.
Then artificial general intelligence is what we see as the god-like creature that knows how to replicate how we think, how we act, and how we take decisions.
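To ground that machine learning definition with a toy example (my own sketch, not from the talk): instead of hard-coding a rule, we let the algorithm learn it from example data.

```python
import numpy as np

# Example data that secretly follows y = 3x + 2, plus a bit of noise
x = np.linspace(0, 10, 50)
y = 3 * x + 2 + np.random.normal(0, 0.5, size=x.shape)

# The "rule" is learned from the examples rather than explicitly programmed
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned rule: y = {slope:.2f} * x + {intercept:.2f}")
```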
Now I'm going to speak through a project that I've talked about in other places. Suggestive Drawing was my master's thesis at the Harvard Graduate School of Design. I tried to get a bit detached from the architecture side of things and go to drawing as a way to explore what role machine learning plays in the design field, and also because, as we commented before, it's easier today for machines to work with images, with raster data, than with vector data. In general we use drawing to represent the world as we see it, and we've seen that there is already drawing software that makes our lives easier: Photoshop, for example, lets you copy and paste or replicate things, and CAD makes it really easy to create repetitive forms and complex geometry. But machines don't really participate in the process: if we define that we understand the world by seeing it and then representing it, machines don't really do that.
And again, there is a shift where now some machines, or some programs, know how to interpret the world with what are called narrow AIs. Narrow means this program only knows how to draw a bird: I give it a circle and it draws a bird, I write a word and it draws a bird; it doesn't matter what you do, it's going to try to turn it into a bird. This boom in artificial intelligence and machine learning allows us, for instance, to have a computer help us continue or color a drawing.
That was the premise of this project. In this project I also tried to move away from the classical toolbar, from point-and-click computer programs, and to provide an alternative to what we see on the left. What we see on the left is a process where you really know your outcome, you know what you want to solve, and you want a way out of that loop; a computer can provide you with that path, the optimal path to get there. On the right, what you have is more of a sculptural process, where every time you have a different shape you can browse through it and reevaluate where you are going. That's what you can see in this project: every time the machine, or the artificial intelligence, brings you a suggestion of how to color your drawing, you might continue drawing in a different way.
The application itself was an iPad app. On the left you had all the agents that were helping you, which might be other humans with a tablet, a computer, or a phone, or bots that would help you continue drawings. On the right you would see something that looks like Photoshop layers but is actually the history of the drawing, everything that each of the agents has been adding to the project. You can see all of this at nono.ma/ai, where you have the whole set of resources. And in the middle you have the drawing: we all know what a drawing canvas looks like, but this one was a shared canvas that everyone would be seeing at the same time. What I developed was a set of bots, each of them with a really narrow but specific behavior. Texturer, for instance, would generate a texture for a drawing that you sketch; Continuator would try to continue your drawing with the next strokes that you need; and Sketcher would fill your drawings with something that looks like a hand-sketched texture. The other names are self-explanatory. I didn't implement the ones at the bottom, but they were conceptually there, and there is a clear way they could be developed: Colorizer, Classifier, Rationalizer, and Learner, for example. The concept of Learner is probably the most interesting, because it refers to creating a feedback loop where, while you're creating the history of your drawings, the bot could learn from what you're doing and readjust its knowledge for future suggestions.
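As a purely hypothetical sketch of what those narrow, single-behavior bots could look like in code (the names and structure are my own illustration, not the thesis implementation):

```python
class Bot:
    """A narrow agent: it watches the shared canvas and suggests one kind of edit."""
    name = "bot"

    def suggest(self, drawing):
        raise NotImplementedError

class Texturer(Bot):
    """Generates a texture for a sketch, e.g. by calling an image-to-image model."""
    name = "texturer"

    def suggest(self, drawing):
        return {"agent": self.name, "layer": f"texture for {drawing}"}

class Learner(Bot):
    """Feedback loop: remembers what the human drew to adapt future suggestions."""
    name = "learner"

    def __init__(self):
        self.history = []

    def suggest(self, drawing):
        self.history.append(drawing)
        return {"agent": self.name, "layer": f"suggestion based on {len(self.history)} past drawings"}

canvas = "daisy sketch"
for bot in (Texturer(), Learner()):
    print(bot.suggest(canvas))
```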
This is the first actual prototype that I created, a few weeks before the presentation. Here I was asking the bot to give me suggestions for a daisy, and I was drawing daisies. What you can see is what's called a generative adversarial network, which can interpolate results: there were no flowers with rectangular petals in the training set, but the algorithm tries to squeeze something into that shape to generate something that looks like a flower.
Here is a bit of background on the system: it runs Pix2Pix in the background. I didn't make this neural network architecture; it was done by Phillip Isola and his team at Berkeley. There was also a TensorFlow port that let me use it, made by Christopher Hesse; you might know that project because it went really viral as the edges2cats demo.
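As a hedged sketch of what feeding a model like this involves: paired image-to-image training sets are typically laid out with the input (for example, edges) and the target side by side in a single image. Something like this, using Pillow, with hypothetical file names:

```python
from PIL import Image

def make_pair(input_path, target_path, out_path, size=256):
    """Place an input image and its target side by side, the usual layout
    for paired image-to-image training data."""
    a = Image.open(input_path).convert("RGB").resize((size, size))
    b = Image.open(target_path).convert("RGB").resize((size, size))
    pair = Image.new("RGB", (size * 2, size))
    pair.paste(a, (0, 0))
    pair.paste(b, (size, 0))
    pair.save(out_path)

# Hypothetical file names for a single training pair
make_pair("tree_01_edges.png", "tree_01_sketch.png", "pairs/tree_01.png")
```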
One thing I will highlight there: let's focus on this piece. This is Texturer, the one I explained before, where I trained with different types of flowers, different types of dresses, or other training sets, and I was just exploring what the machine would actually do. This one was Sketcher: I trained it, I think, with hand-sketched drawings, sixty-four sketches of trees that were made by my mom, actually. While I was in Cambridge she was in Spain, and she asked what she could do to help me out, so she actually sketched 64 trees and we trained with them; I'll show that later.
Then there is the one I probably like the most, a more abstract one, where I try to use the neural network to draw things that it doesn't expect to be drawn. It's really easy to make the algorithm work when you know what it's good for, but when you try to draw something it has not been trained for, it's a bit challenging: it doesn't really know what to do, and it just tries to generate some texture for it. That's kind of the edge case where it won't work, but maybe it gives us some inspiration for drawing.
This is Sketcher with 1,000 dresses. It didn't work that well, but those are 1,000 dresses I scraped from the Met Museum website; it's open, and there are hundreds of thousands of things there to get. What's easy about this is that we can get it online: we scrape it, we get it, and we have a training set really quickly, although sometimes there are copyright issues or other things that mean we cannot use it. What feels more human is to actually do it by hand. These 64 trees are hand-sketched; on the left we just do edge extraction with something like Mathematica, or even, I think, with Photoshop, and you get something like this. This is a training set where you use augmentation techniques, rotating or scaling and things like that, to have maybe 400 samples, and you can train something like Pix2Pix with it: it takes the drawing of an outline and tries to fill it with a texture. How do you do it with Mathematica? Mathematica has an edge-extraction algorithm, and the only thing you have to do is adjust the parameters so it looks right.
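As an alternative, hedged sketch of that same preprocessing outside Mathematica or Photoshop: edge extraction and a bit of augmentation can also be done in Python with OpenCV (file names are hypothetical, and the aug/ folder is assumed to exist):

```python
import cv2

# Edge extraction: turn a hand sketch into an outline image
img = cv2.imread("tree_01.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150)  # thresholds need tuning per dataset
cv2.imwrite("tree_01_edges.png", 255 - edges)          # invert so lines are dark on white

# Simple augmentation: rotations and flips to grow 64 samples toward a few hundred
h, w = img.shape
for angle in (0, 90, 180, 270):
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h), borderValue=255)
    cv2.imwrite(f"aug/tree_01_rot{angle}.png", rotated)
    cv2.imwrite(f"aug/tree_01_rot{angle}_flip.png", cv2.flip(rotated, 1))
```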
So, just as a recap: the importance of this project, I think, is that each training set is going to yield a unique behavior in the algorithm; when you change the images, you're going to get a completely different result, and it's easy because we don't have to write a line of code, we just feed it input. This is what I did over the last 30 days of the project: I would gather a training set during the day, I would put it to train at night, and after eight hours I would wake up and the thing would be ready to try. I would say that 60% of the things I tried didn't work that well, but then I started to learn what sort of training set worked better, and that's when I started to have a bit more fun. Sometimes you would intuitively think, okay, this thing is going to learn a lot of patterns from what I've given it, but then you wouldn't really see anything; and some other times, like with flowers, it would be super responsive, because there is a really repetitive pattern in how flowers appear in pictures.
Okay, so that was it for Suggestive Drawing. The last thing I would like to show is a few experimental things with Runway. Runway is a really new tool, developed by Cristóbal Valenzuela and his team; it's a startup they started, I think, maybe a year ago or so. What they're trying to do is offer a framework around applied machine learning for creators, for creative people who maybe don't know a lot about Python, about coding, about GitHub and cloning repositories and doing all those things (and maybe, if you don't understand what I'm saying, it's for you as well). They're trying to make it easy to download a model, to get it running on a server, to get it running locally on your machine, or to run it in the cloud, and also to train models; that's one of the things they're working on now, making it easy to train a new model on different architectures. I actually interviewed Cristóbal last year, and one of the things he said that I find funny is that one of his favorite user interfaces is Spotify, and this to me looks really like the Spotify of machine learning. You can see it seems like it's changing a bit; this is from a few months back, but it still looks similar to this.
This is me at home just trying it out, creating a new workspace, as they call it, with the PoseNet model. PoseNet is an open-source model to get your skeleton from video or images, and just by downloading that model in the background you can get this running; I didn't have to write any code or learn how the algorithm works. What they do is take the feed from the camera, from a video, or from whatever you give it, and then output the result in JSON format. They render it here, but it goes out as JSON, and you could read it from any other application, which might be a website, or Grasshopper, or any other place where you want to read that information.
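As a hedged sketch of consuming that JSON on the other end (the host, port, and route here are assumptions; the actual network settings are shown in the Runway interface, and the keypoint schema depends on the model):

```python
import requests

RUNWAY_URL = "http://localhost:8000/data"  # hypothetical local endpoint

response = requests.get(RUNWAY_URL, timeout=5)
pose = response.json()

# Print whatever keypoints come back; another app (a website, Grasshopper, etc.)
# could read the same JSON and drive geometry with it
for keypoint in pose.get("keypoints", []):
    print(keypoint)
```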
The same thing goes for this one: this is style transfer, where you have models trained on the styles of different artists and you can run them live. And then I got some old pictures from my parents, and this is a model to colorize images, so you can see, live from the webcam, how you can color photographs. These are just models; what they're creating is the infrastructure, wrapping up open-source models, and the community is also adding models. What I see as really good about this is that they're democratizing access to machine learning, because even for me it's hard to find the time to learn how to use each different model. Yes, it's trained on skeletons of human beings. And Runway is now in open beta, so you can download it and try it out.
It's really nice. I forgot the names of the two other teammates, but Gene Kogan, I think, is on the advisory board or something, as is the Coding Train guy, Daniel Shiffman; I think they advise them and help them out. There are three core people in the company, and their team is growing now.
Okay, so this is another quick example; this one is just conceptual, so I'll kind of pass through it. We've seen the model I showed before, where you can go from edges to a daisy (this slide is in Spanish, I didn't get time to translate it). The concept here is that you go from one drawing to a texture, so you have a model that does that, and, as we've been talking about this morning, you might also get a model that scores, ranks, or sorts the results: a classifier that tells you whether this looks like a daisy or not, and maybe some models that let you categorize by type. You can then embed that in something like Grasshopper or Dynamo, and you literally just input an image, get the output, and classify it to see if it looks like a flower; you can then loop back into the generative model and see how you can optimize it to look more and more like a flower. That would be a process where you're optimizing your output to be more similar to a kind of flower every time.
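A minimal sketch of that closed loop, with made-up functions standing in for the generative model and the classifier (conceptual only, not an actual Grasshopper or Dynamo definition):

```python
import random

def generate_texture(edges, seed):
    """Stand-in for an image-to-image model, e.g. the edges-to-daisy generator."""
    random.seed(seed)
    return {"edges": edges, "seed": seed}

def flower_score(image):
    """Stand-in for a classifier that rates how flower-like an image looks (0 to 1)."""
    return random.random()

def optimize(edges, candidates=50):
    """Generate many options, score each one, and keep the most flower-like."""
    best_image, best_score = None, -1.0
    for seed in range(candidates):
        image = generate_texture(edges, seed)  # generative step
        score = flower_score(image)            # classification step
        if score > best_score:
            best_image, best_score = image, score
    return best_image, best_score

best, score = optimize("daisy outline")
print(score, best)
```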
This is another experiment, just to show that at work, once a year, we get a week or two for experimental projects, so I played with Runway and with one of those style transfer algorithms in our application, which is actually the one the cluster in the workshop right next to us is using. This was an early stage of one of our apps, and these are meant to just look like 3D cubes, but I connected it to Runway to render a texture in real time. This is sped up maybe five or ten times, but you would get Runway generating a texture for that image all the time. What happens here is that with a plain cube you're getting something that looks like it's been drawn with a pencil, or with a brush and oil paint; this is another style. And that's it. The only closing comment I have with this image is that this might also present a possibility for skipping the rendering process of an image: you have cubes on the left and you get a textured image on the right, which is what we were doing here. So we save not only time but maybe even paying for V-Ray or some other software, because these models embed styles and you can manipulate something in 3D and get an output in real time. Thank you.

1 Comment

  • garciadelcastillo, October 18, 2019 at 11:42 pm

    This is so cool 🙂
