Through research and experimentation, The Machine & Me (Vol. 1) follows the process of distilling the otherwise intangible field of deep learning into something the average learner can understand.
I'm not a computer scientist...
Or at least I never claimed to be.
If we’re being honest,
only last year did I learn files shouldn’t be saved to your desktop.
Over the course of 100 days, however, I embarked on an experiment to understand Machine Learning.
If you are not familiar with this term, worry not: this project examines, in detail, my journey of learning it myself.
Prior to this journey, I had finished every requirement of my undergraduate Architecture degree at Thomas Jefferson University except four (4) free electives.
I structured what I call my “bonus” semester around my pursuit of understanding Machine Learning. The study integrated concurrent courses in Digital Photography, Digital Imaging, and Python Programming.
Under the guidance of Dr. Kihong Ku,
I present:
Artificial intelligence, more commonly referred to as "AI," is the endeavor to replicate human intelligence in machines. The goal of AI is to improve life for humans.
What those improvements are is up to us to decide. AI is the current paradigm for the next step in machines' evolution.
Within the broader concept of AI lie further paradigms that advance that effort. AI is already improving daily life: Alexa, Roomba, Google Security Systems, your Netflix algorithm. AI is utilized by the medical, technological, and creative fields alike.
We can think of AI as an overarching term. Machine Learning and Deep Learning are subsets of it, and those subsets have subsets of their own. No subset is inherently “better” or more advanced than another; many work hand-in-hand, performing different tasks for different needs.
The specific subset I, as a designer, grew interested in is Generative Design, and more specifically, Generative Adversarial Networks (GANs).
Designers might be familiar with the terms “generative” or “generative design.” Artists generate designs through an iterative process.
Generative design and GANs are tools that aid designers in the iterative design process.
Generative Design is the most recent computational paradigm to be adopted by architecture.
Artificial Intelligence and Generative Design are not new concepts. AI has been around since the 1960s, and generative design since the 1990s.
There are countless resources exploring both, particularly as they pertain to architecture. Additionally, there are various tools that integrate closely with the programs designers are already well versed in.
For non-designers: architectural designers use 3D modeling programs like Rhino and Autodesk Revit, and Digital Imaging tools like Adobe Photoshop and Illustrator.
Fractal is an open source AI-based platform that works as an optioneering plugin for Dynamo (an extension of Revit).
Dynamo itself has added generative design capabilities to its platform. Adobe has introduced AI features like StyleTransfer and Object Detection.
Over the course of this project, I worked with RunwayML.
RunwayML is a platform, not specific to architecture, that allows non-coders to experiment with AI through a computer science lens. Alongside these programs, there are countless YouTube tutorials and Medium articles.
In four months, I taught myself not only the overall concepts of generative programs but also the nitty-gritty details of how they work.
It took me a year and a half, however, to fully grasp generative design.
The concept was first introduced to me in Experimental Modeling* in the Fall of 2020, alongside computational design, parametricism, biomimicry, emergence, and systems thinking (I could go on for a while).
Additionally, these concepts rely on logic, terminology, and principles foreign to new designers. Climbing this learning curve is a project unto itself, but once climbed, they become valuable tools in the design process.
*A course at Thomas Jefferson University instructed by Dr. Kihong Ku.
Generative Design benefits designers because it expedites the iterative process.
When machines were first conceived, their objective was to perform faster and more precisely than humans.
Just as Babbage's Difference Engine mechanized calculation,
generative machines can produce infinite design iterations at rates that are not humanly possible.
Fractal and Dynamo, for example, cut production time on tedious problems like program placement, layout, sizing, and organization.
An infinite number of iterations sounds intimidating, but optimization within these programs can rank the iterations against set criteria and offer up the best solutions.
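To make that concrete, here is a minimal Python sketch of the idea; it is not tied to Fractal, Dynamo, or any real tool, and every parameter name and scoring rule below is invented purely for illustration: generate many candidate layouts, score each against the designer's priorities, and surface the best few.

import random

def generate_candidate():
    # Hypothetical layout described by a few tunable parameters.
    return {
        "corridor_width_m": random.uniform(1.5, 3.0),
        "ward_count": random.randint(4, 12),
        "window_ratio": random.uniform(0.1, 0.5),
    }

def score(candidate):
    # Made-up scoring: reward wider circulation and daylight,
    # penalize drifting away from a target ward count.
    return (
        candidate["corridor_width_m"] * 2.0
        + candidate["window_ratio"] * 5.0
        - abs(candidate["ward_count"] - 8) * 0.5
    )

# Produce many iterations quickly, then rank them by the score
# and keep the top five for the designer to review.
candidates = [generate_candidate() for _ in range(1000)]
best = sorted(candidates, key=score, reverse=True)[:5]
for layout in best:
    print(round(score(layout), 2), layout)

Real optioneering tools work at far greater scale and with real geometry, but the loop is the same: generate, evaluate, rank.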
Let's take a look at what this means:
In my final year of undergrad architecture studio, I studied healthcare infrastructure in rural Malawi. My final project involved Infectious Disease Units.
My design group struggled with program adjacency, optimal circulation, and safety. We each iterated through countless floor plans, trying various methods to find a layout that met our priorities.
This process took two months. The floor plan was critical to the design, but so were many other components. When we reached a satisfactory (but not perfect) rendition, we had only a month left.
A key selling point of our project was a bay of Intensive Care Unit rooms that provided safe sightlines from the patients to both staff and loved ones.
This concept brought empathy into the design.
However, it was undercooked.
Our group was brimming with other innovative solutions beyond the sightline concept, but time cut us short.
Had we had access to generative design tools,
we could have had our floor plan done in minutes, not months.
This would have left room for façade design, ventilation paths, landscaping, etc.
By decreasing production time and expediting the iterative process, generative design frees up more time for the creative process.
NEXT: