Artificial intelligence has put endless possibilities for art and self-expression right at our fingertips. For Memo Akten, Ph.D., AI became a creative tool to reflect the human condition. As an artist, musician, researcher, and AI whisperer, he has produced a number of great works from the creative interplay of natural and machine. In this episode, Memo joins us to share his unique way of looking at AI through the many art pieces he has created over the years. From a digital art installation that uses digital puppetry to his most recent work, “Distributed Consciousness,” Memo amazes us with the parallels between AI and our lives. He also lets us in on his thoughts about AI overtaking humans, the influence of AI on the creative industries, the power of decentralization, and more. Tune in so you don’t miss out!
- AI & Creativity: Using AI as a visual instrument to create a form of self-expression
- AI’s Potential: Disruptions in the creative industries in the next few years
- AI’s Impact on the Creative Process: Understanding the concept of ownership in a decentralized environment
- Will writing code become obsolete because of AI?
- “I’m interested in the whole spectrum from mystical to technical because ultimately I’m interested in the human condition.”
- “How can I make tools that allow people who might not have access to certain equipment, to still tell stories, to make films, to make animations and things like that.”
- “I wanted to build what I call instruments analogous to a piano or a guitar, but not necessarily musical instruments, but visual instruments.”
Listen to the podcast here
Intelligence: A Creative Interplay Of Natural And Machine Feat. Ars Electronica Award Winning AI Artisan Memo Akten, Ph.D.
This is Memo Akten. I’m an artist. I work with emerging tech to probe and understand the human condition and the nature of nature. I’m on Edge of AI, the show that’s emerging to be the natural best choice for learning about AI.
Here’s what’s to come on this journey. Find out about the many seasons of AI’s development through the eyes of a unique artist and scholar and the potential value of staring at waves and trees for hours on end. Find out how to get the most out of AI by imagining it’s a rubber duck. All this and more, take your seat.
Like most of you, I have embraced the spirit of exploration and entrepreneurship throughout my life. From starting my own business before graduating high school to traversing the world’s most challenging terrains, I have always sought out new frontiers and adventures. I have conquered legal battles and built award-winning homes, and now I lead a public company dedicated to pushing tech boundaries and unlocking our full potential. Together we will navigate uncharted territories in AI, the guiding star on this quest. My job is going to be to ask great questions, and that’s exactly what I will endeavor to do. Buckle up and get ready to embark on an amazing venture. Let’s set sail.
Our guest is Memo Akten. Memo has been featured in major publications like Wired, The Guardian, and Financial Times. His list of collaborators includes Lenny Kravitz, U2, Depeche Mode, Professor Richard Dawkins, Google, Apple, Twitter, Deutsche Bank, and Sony PlayStation. He’s a multidisciplinary artist, musician, and researcher, originating from Istanbul, Turkey. For more than a decade, his work has been exploring AI or Artificial Intelligence, big data, and our collective consciousness as scraped by the internet.
Thematically, he’s fascinated with intelligence in nature and machines and integrates the study of hard sciences as well as religion and ritual into his artistic practice. In 2021, he earned a PhD from Goldsmiths, University of London, specializing in creative applications of deep neural networks with meaningful human control. In this field, he is considered one of the world’s leading pioneers. He is an assistant professor of Computational Art at the University of California San Diego.
Akten received the Prix Ars Electronica Golden Nica. While it’s a mouthful, it is one of the most prestigious awards in new media art, and that was for his work Forms in 2013. His work has been featured internationally at a number of prestigious venues and exhibitions, including the Royal Opera House in London, Ars Electronica, and the Grand Palais in Paris’ Artists & Robots exhibition in 2018.
He has built, advised, and consulted on projects that integrate art and technology. He resides in LA, California. Trust me, this is going to be an amazing show. The gentleman you are about to learn from is fantastic. Memo, let’s start with your history as an artist and with AI. What first attracted you to AI, and what prompted you as an artist to pursue a PhD involving AI?
First of all, thank you for that generous introduction. It’s great to be here and chat about these topics. How did I get into AI? AI is a broad topic and there are a few channels. As a kid, I was very much into the AI that we knew from sci-fi. I was a massive Arthur C. Clarke and Isaac Asimov fan, mostly through books. Later, there’s another AI: the real-world technology of AI that we actually use, which is different from the robots we get in sci-fi. I got interested in that probably in the early 2000s because, as an artist, I was interested in emerging technologies and software; computers were my medium.
I was interested in creating interactive systems, computers that could somehow understand what their human users wanted to do. I wanted to build what I call instruments, analogous to a piano or a guitar, but not necessarily musical instruments, but visual instruments. I was writing software that could “understand” what was happening in the world. I was working a lot with computer vision: cameras attached to computers trying to understand the world, microphones attached to computers, sensors trying to make sense of the world. This is what AI does. AI is the discipline that is trying to imbue computers with an understanding of what’s happening in the world.
I have been doing this on one hand as an artist, wearing my artist hat, and on the other hand as a researcher and computer scientist, wearing that hat. Around 2014, I was quite into machine learning, which is the dominant flavor of AI that we have, and I thought, “This is going to be big. I don’t want to just be hacking at this. I want to understand it from the bottom up.” I decided to do a PhD and embarked on it in 2014. I was already deep into my artistic career, so I didn’t do a PhD for career purposes. It was personal curiosity.
From my point of view, it’s such a rare combination to have someone that expert in tech who is also, at his core, an absolute true artist. Combine that with a curiosity about nature and the world, and a lot of times that comes back to something mystical; I use that term for lack of a better one. You bring it all to ground level. That’s what we are going to find out here. It’s proven to be fascinating. Let’s review some of your work as we get going. The pieces are beautiful and somewhat mysterious as well. Perhaps you can show us a few things and explain them. We can describe them for the readers, and folks can also review them on YouTube when they get a chance.
I’m an artist. I’m interested in creating artworks and I’m interested in the whole spectrum from mystical to technical because I’m interested in the human condition and this is the human condition. We are both technical but also romantic and mystical beings.
What I found so unique about you is that often, if you separate the types of people in this world, some of them are technical: “It’s got to make sense. I have got to be able to see it, touch it, and understand it.” The others are very much generalists. Usually, they don’t cross over. The way you have done that, being at a ridiculously expert level in both, is amazing to me. I feel fortunate to spend this time with you and to share it with all our readers.
That’s very kind of you. I have shared my screen. Hopefully, you can see it.
We have got it. What we are looking at is a series of images, which represent much of Memo’s art.
I’m showing my website, which is a bunch of thumbnails. I made this in 2008 and I will play this video. I will turn the sound down. This is one of my earlier works: a large interactive digital virtual paint wall. People walk up to this wall and, through their movements, splash all colors of the rainbow virtually, painting the wall with their bodies. This wouldn’t be called machine learning, but it’s using computer vision. There are sensors that track the movement of the audience. This is the audience interacting here.
I’m watching individuals in front of the wall, and as big as the screen is, you see this wall getting paint splashed all over it. What’s dictating the paint splash is the movement. You have people dancing in front of it, and meanwhile, the wall changes as they do, tied to their movements.
The technology that I’m using in this work goes back decades. A lot of the technologies in AI have roots that go back quite far. This was some of my earlier work using computer vision. If I jump forward, I will show this one, Pattern Recognition. This is from 2016. I will play this video. This is a dance piece. You will see on screen two dancers and robotic spotlights. Here, the robotic spotlights are also controlled by software.
You see a dark stage that is black all around except for the dancers themselves, in this case a couple of dancers on stage. There are all these spotlights trained on the dancers. In real life, there is no normal lighting person at the back of the audience controlling these things.
There is no light programming. They look like laser beams and they respond in real-time. This isn’t pre-programmed. There’s an AI system that is watching the dancers’ movements and controlling the lights in such a way that the lights are also performers. This is a duet. There are two dancers, but we also consider the light beams as performers because they are responding in real-time to what the dancers are doing. This was 2016. Again, my explorations into AI.
What’s interesting is they are responding to what the dancers are doing, not doing exactly what they are doing. They are interacting on their own.
We trained the lights on pairings of dancers. We wanted the lights to feel like they have some intelligence, some response beyond mimicking. That was 2016. I will show you one more project. This is from 2017, Learning to See. This is perhaps my most popular work; it went quite viral. I will show these images first. In these images, there’s a table, and on the table there’s a bunch of wires, rags, and cloth, things like that. There’s a camera looking down. You can see a live video feed of what the camera is seeing, the wires and cables. Next to it, you see the same scene reconstructed, in this case in the form of a galaxy or a nebula.
This one, the camera is looking down on some cloth of some sort, maybe a towel, blanket, or something on a tabletop, and then someone moving them around. That’s on the left side. That’s all that the camera is catching. On the right side, I’m looking at waves coming in on the beach and those waves mirror the movements of the towels on the other screen. It’s amazing.
I have two motivations. One motivation is this is me playing with the clothes. On the left, you see my hands playing with some cloth and some cables. On the right, you see ocean waves in the same shape as the cloth. This is me looking at a new form of filmmaking, puppetry, and visual expression. I wanted to be a filmmaker as a kid but I had no means to make films, growing up in Turkey. This has always been an important thing for me. How can I make tools that allow people who might not have access to certain equipment, to still tell stories, make films, make animations, and things like that?
As we are watching, in the camera, you took a charger with a plug that goes into the wall and placed it in the middle of those towels, and you end up with a rock that the waves are breaking against. It’s amazing.
This is all real-time. This is happening live. This is what I call digital puppetry. This was 2017. This is using now pretty much the same technologies that we use in AI. It’s deep neural networks that are powering this particular work.
Now, he’s picking up that charger and it looks like he’s picking up fire on the other screen. It’s all flames and fire, and it looks like his hand is on fire.
Now, I’m playing with cables, and on the right, we see flowers. What I made is an interactive installation that the public could play with. It’s amazing to see what people do. It’s like crafting. People play with the cables on the cloth to craft their perfect bouquet, or they bend the wires in such a way as to create a beautiful nebula. To me, this is what I would call an instrument: a visual instrument for a form of self-expression. I will stop sharing there.
For those of you who are able to see it on the YouTube feed, you will be impressed. It starts making you understand these different theories that Memo has studied, how they come together, and why, quite honestly. It’s pretty impressive. Let’s talk about your work, Distributed Consciousness. It’s a multifaceted artwork spanning many themes: biology, distributed computation, distributed cognition, climate change, activism, and animal minds. Give us a little more on what it is and what it means to you.
Distributed Consciousness is one of my most recent works. I made it in 2021 and it’s quite a complex work that brings together lots of themes. I have been in AI for decades and I have seen the waves of popularity in the mainstream. Around 2015 and 2016 is when AI first seemed to me to burst into the mainstream. Now, there’s a second wave of bursting into the mainstream.
Even if you look at the Google Trends graph for the term AI, it exploded in 2015. There’s a baseline, and then in 2015, there are lots of articles about AI. Then in 2016, all of a sudden, I noticed lots of articles and even books about cephalopods and octopuses. I thought this was interesting. I’m not the only one to make this interpretation, but it’s because cephalopods are already such an alien intelligence.
Their nervous system is so different from that of mammals, or even birds or reptiles. They have most of their neurons in their arms, and each arm is semi-autonomous. You can cut off the arm of an octopus and it can live for a few hours; it will go and hunt. It will find food, taste the food, and pass the food up the arm toward a mouth that isn’t there. The arms can even coordinate with each other without involving the central brain.
Now, as we are building AI and having to think about what other intelligences might be like, octopuses are a great example because they are super intelligent and can solve problems. They are a fascinating example of another intelligence that we already share the planet with, and they came to symbolize this other superintelligence.
On top of that, around 2021, blockchain exploded. NFTs exploded. I was thinking a lot about distributed computation, which is what blockchains like Ethereum and Tezos provide. Instead of a single central server, every node on the network runs the full code; it’s distributed in that sense. I have been thinking about the parallels between blockchains and biology, multi-celled organisms. Every cell in my body has a copy of my DNA. They specialize, but in a way, every cell is a fully functioning intelligent machine.
Again, linking back to octopuses, they take this even further: their cognition is completely distributed. There are many different types of AI that one could build. The ones that we are building now are based on big data, on scraping the internet for knowledge and then building systems around that. When we interact with something like ChatGPT, what we are interacting with is the collective knowledge that’s been accumulated over centuries and millennia and archived on the internet.
Now, it’s being reorganized and we are building new ways to access that knowledge. I can talk about this in more detail shortly, but I wanted to pause there and say these are some of the themes behind this work. To quickly summarize, it started as an NFT collection, Distributed Consciousness, and it’s now also an installation; I have a big installation in Australia. It’s a multifaceted work in that sense.
I do want to take another step from what you discussed, because I heard you discuss this before and it’s relevant. I will paraphrase what you were stating and let you elaborate. We have always seen ourselves as the wisest, smartest, and most advanced beings. You mentioned biblical times. What’s going on with AI now, and the frightening thing for a lot of people, is what if it becomes wiser and smarter than us? You had a whole conversation on that that was fascinating. Can you elaborate on that a little bit?
Thank you for bringing that up. It’s what I refer to as the final decentering of human exceptionalism. I’m fascinated by this lineage, particularly in Western culture, which comes from the Abrahamic religions. It’s right on the opening page of the Bible, the book of Genesis in the Old Testament: that man dominates nature and that the world is here for man to dominate.
Throughout the history of science, we see Galileo and Copernicus claim that the Earth is not at the center of the universe. It was traumatic for many people to realize that our world is not the center of the universe, and it caused what’s called the Copernican trauma. It’s a complete change in worldview. Later, we relived this with Darwin and On the Origin of Species, which says that not only are we not the center of the universe, but even as a species, we are not that special. We evolved from apes; all living beings come from the same place.
Again, it was another Copernican trauma, another epistemological shift. Now, we face another trauma, for some. I embrace this. People now accept that we are not the only solar system, that we are not the only planet, that we evolved from apes, but they still hold that we are special: we are intelligent, we have a soul.
Now, as we deal with building AI that can potentially be creative, that can potentially do things we thought only humans could do, this is traumatic for a lot of people. I embrace that side of it. There’s a lot I’m concerned about with AI, but the idea that we are not special, that our intelligence, our creativity, and even our consciousness could maybe be modeled by a computer, I find an exhilarating thought.
It’s concepts like this that go back to your artistry combined with your tech background. The fact that you have been part of and studying AI for, as you said, decades, means you are watching what you would call the second mass-market wave. I think of this as the first one, by the way, but your history is deeper and longer, and what you said makes a lot of sense. It’s probably the second mass-market wave, and the third and fourth will be even bigger. Maybe you know what those are. I do not. Having the reference points you described is important to put a frame around this thing that feels uncontrolled.
Regarding the second wave, I should first mention, having studied the history of AI, that this is probably already the sixth wave. Since the ‘50s, there have been many of these cycles where a lot is promised, huge amounts of funding come in, and then the field fails to deliver what it promised and funding dries up. The downturns are called AI winters. The AI summers famously started in the ‘50s, and the cycle repeated in the ‘60s, the ‘70s, and the ‘80s. It happens roughly every decade. This is probably the sixth cycle now.
I do think that the cycles to come are going to be different. Maybe I’m digressing now, but consider the research that’s been happening in AI. As an artist, I’m particularly looking at the creative aspects of what’s happening: generative AI, things like Midjourney, Stable Diffusion, DALL-E, and ChatGPT.
A lot of these tools come out of research companies that are doing fundamental research, trying to solve research questions. They are not tools developed by companies trying to build user-facing products designed to solve industry problems. That’s not where these tools come from. This is fundamental research that is now showing enormous promise. DALL-E was probably the first one; when it came out in January 2021, it blew me away. It blew everyone away.
OpenAI‘s goal isn’t to make text-to-image tools. Their goal is to build AGI, Artificial General Intelligence, and to build AGI, they need to build systems that can understand the world. That means if they see something from a camera, we humans want to be able to ask them questions about what they see and they need to be able to respond. They need to have a good understanding of what is in that picture.
This is why they developed a technology called CLIP, and off the back of that, they developed DALL-E. The whole generative image revolution that’s happening now wasn’t the intent. Now that it’s happening, Adobe is investing a lot in this, and so are tons of startups. In the next few years, we are going to see a major disruption in the creative industries from AI.
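To make the CLIP idea concrete, here is a toy sketch of how a system might pick the caption that best matches an image by comparing embedding vectors. This is not OpenAI's API; the hand-made three-number "embeddings" below are stand-ins for what CLIP's trained image and text encoders would actually produce.

```python
# Toy sketch of the CLIP idea: score how well each caption matches an image
# by cosine similarity between embedding vectors. The tiny vectors here are
# invented stand-ins; real CLIP embeddings come from trained encoders.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings: in real CLIP, matching image-text pairs are trained
# to land close together in the shared embedding space.
image_embedding = [0.9, 0.1, 0.2]  # imagine this encodes a photo of a dog
captions = {
    "a photo of a dog": [0.8, 0.2, 0.1],
    "a photo of a cat": [0.1, 0.9, 0.3],
}

# The best caption is the one whose embedding is most similar to the image's.
best = max(captions, key=lambda c: cosine(image_embedding, captions[c]))
print(best)
```

The same trick runs in reverse for text-to-image generation: a model like DALL-E can be steered toward images whose CLIP embedding is close to the prompt's embedding.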
There is so much capital chasing AI without any idea where to go or what to do. This foundational understanding is going to be critical for them to have. What you said was that all of the retail AI products as they exist now came out of fundamental research, and now we are transitioning into tools. Someone says, “I’m going to use that as a foundation and develop this one tool that does this one thing.” To be at that moment in time feels incredibly powerful.
It’s exciting. This is why I got into AI. As a small plug, I am consulting for a startup right now, one of these companies designing tools for filmmakers. It’s called OZU; it’s in alpha right now at OZU.ai. It’s being built around the needs of filmmakers and the way things work in the real world: what are the needs of a producer, a director, or a film editor? Once these tools start hitting the market, we will see the real change in how films, animations, and music get made.
As we are doing this show, there is a pretty big Hollywood strike going on. I believe AI is at the center of a lot of the discussions they are having while trying to pull it all back together. I’d love for them to be talking to you, because you are multiple chess moves ahead of that. The things you might be arguing about now, maybe you need to be a couple of moves ahead, because here’s where it’s going, and the snowball going down the hill is moving fast. You will have to contend with those things immediately. Do you have any comments about Hollywood and what’s going on there in regard to AI’s influence?
This is an important point that you bring up. Thanks again for raising it. I’m an artist. I’m a creator first and foremost. I’m only interested in creating tools that help me with my workflow, that let me do things I want to do but currently can’t. I should also say, as I touched upon, growing up in Turkey, I didn’t have a lot of opportunities to do things. For me, there is a side of AI tools that is democratizing: one person and a computer can maybe make a film. We already have that with music in a way. Ableton Live did that. It allowed somebody in their bedroom to become a music star.
I still miss the big mixing board that was 12 to 15 feet long and had countless channels but you are right, they are outdated.
We still have that. There is an opportunity here to democratize the act of creation. On the flip side, I do want to talk about the issues being raised about the shift of labor and particularly exploitation. It’s shocking to me how much has happened in the last few years with models like Midjourney and Stable Diffusion, which are trained on the work of millions of artists scraped from the internet without their consent.
Famously, you can type into Midjourney, “I want a painting of a dragon in the style of Greg Rutkowski,” a well-known fantasy illustrator. There is a lot to debate here. I have a lot to say about this. One thing I find so shocking is how closely it mirrors the Luddite movement of the early 1800s. Now, I don’t know if the word Luddite is common in the US.
Start from basics here. Educate me.
In the UK, the word Luddite is quite common. It’s a derogatory term for people who are anti-technology. This word comes from a movement that happened in the early 1800s with the invention of things like the Jacquard loom, an automated weaving machine that is seen as a precursor of computers.
When these looms were invented, they cut out the work of all the people who were using the older looms. They were out of work, they protested, and they were called Luddites. It got quite violent. They would even attack factories that bought these new looms and set fire to them, and the government would come and shoot them.
The interesting thing is this. What the Luddites were complaining about was that the added value these machines were bringing was going to the factory owners and not the factory workers. The people who used the previous generation of looms were skilled artisans, and they were being replaced by unskilled laborers. In a way, this is what’s happening with AI as well.
If, for example, the value generated by these new, improved machines had been shared with the workers, there might not have been this revolt. As an artist, this is first and foremost for me: I don’t want to automate myself out of a job. I don’t think that’s the goal. The startup I’m consulting for isn’t trying to do that. It’s trying to make things easier, more enjoyable, and more creative for creators. There’s a big discussion to be had here, but it’s around how businesses operate, how they integrate these technologies, and how they manage their workforce.
Which is ever-changing was the word I was going to say but it’s even more than that. A lot’s happening now because of all this. It’s going to be interesting to watch it roll out. You contributed a publication titled Art and the Science of Generative AI: A Deeper Dive. You are exploring AI’s impact on creative processes and potential implications for art, culture, and society. Tell us how you became involved in this particular publication and some of the key takeaways. You have primed us for that with some of what we have already talked about.
This is a perfect connection, because this paper talks exactly about these issues. There are two papers. One is Art and the Science of Generative AI: A Deeper Dive, as you mentioned. That’s on arXiv, the website Arxiv.org. It’s freely accessible and it’s a long paper, many pages. We wrote a much shorter version for the journal Science, which is quite a major journal; that one is a summary. It’s quite a big group of authors, 10 to 15 of us.
I was contacted by the lead author, Ziv Epstein, who’s a PhD student at MIT, and he wanted to do this review paper. It examines exactly the question that you asked: we have this generative AI; how is it going to disrupt, both positively and negatively, society, ethics, law, the way we understand ownership, IP, legal issues, culture, aesthetics, and all of these things? That’s why it’s such a big team of authors. I’m not a legal scholar, so I don’t speculate on how it affects law or IP, but we had IP lawyers, people from Harvard and various other places who specialize in IP law. They contributed to that aspect of it. My contribution was around aesthetics, arts, and culture.
It dovetails a little into a term I got from you, which is amazing: the nature of nature. When you look at some generative AI, who owns the IP? Is it whoever’s idea went into the AI and helped the bot utilize it? Maybe I create something and someone takes that and morphs it; at what stage does ownership change? This is the kind of question we have to deal with. Now, I don’t expect you to have the answers; nobody does. These are the big questions that are going to be coming at us quickly.
We sat around and had these big author-writing meetings, and I was quite surprised to see that these top lawyers and law scholars do not have a definitive answer to this. They cite precedents and different cases, but ultimately, it seems there isn’t a clear answer right now. I do urge people to go and read the paper, the version on Arxiv.org, where a lot of machine learning and AI papers are published for free. You can read it freely, and it goes in depth into the concerns people have about the legal concepts of ownership.
One thing I will say from a speculative point of view, and it isn’t in the paper because the paper is rigorous and doesn’t go into speculation: the concept of ownership needs to change with what’s happening. Even going back to Distributed Consciousness, when you create something with, say, Midjourney, it’s a collaboration with everybody in the world, particularly with Midjourney because Midjourney is on Discord. It’s a public chat room. You can go to the chat room, see what people are creating, and branch off. No one is the author. Everyone is the author. The concept of authorship is going to change.
The change rolls in for those of us who come from the blockchain world. I always felt like if you didn’t get it when you were much younger, it took a while to understand it. I used to say that you have got to forget everything you ever knew to truly understand it. It’s the decentralized world versus the centralized world. A lot of people who are not as familiar may think, “If someone can’t own it, then you are not going to have an incentive to develop it.”
We are used to these big entities and big conglomerations pouring a bunch of resources and developing great things. Certainly not dissing any of that, but that’s the thought process behind how we have grown in the past. What you described is a decentralization. Everybody is doing it and they can carve out their return of value however they want to define that. You have to make that mental transition to say, “This decentralization is something and it’s worth having grow.” I wouldn’t say switch to, but I would say grow and develop.
That’s an interesting point you touched upon. This new model of creation is not even necessarily new, because creation has always been a shared thing. In ancient times, and even now in many cultures, there are storytelling cultures, oral cultures, with stories passed from generation to generation. They evolve and morph along the way. None of this is new, but it’s being computationalized.
Interestingly, blockchain could capture this. I don’t think blockchain apps have been developed to that extent, but you can imagine a system where every action someone takes is somehow tokenized. I do something, and then someone branches off of that; that is tokenized. At the end of the chain, something is created, but we have a full record of every contribution from everybody along that chain.
It could even be thousands of people who contributed to the creation of an image without knowing they contributed. I upload an image, and someone takes that image, takes my prompt, modifies it, and creates a new image. I don't even know they took it, but it's recorded on the blockchain. There is this full chain of transactions. Oddly enough, that is advanced Web3: if the whole internet were truly decentralized at its core, it could support this new model of authorship.
You get the benefit of crypto because there can be micropayments, and those payments could be as small as a penny or even smaller. You can attribute the value, and it's all smart contracts. Hopefully, I'm not speaking Greek to some of the people out there, but it's all scripted in. You choose to do something on that blockchain, and it's connected to a wallet of sorts. The wallet will probably look different than wallets do now, but it's connected to the wallet, and these payments come in. People end up incentivized and rewarded. They may not own the whole of something, but they get some value from the contribution they made. That is the power of decentralization, which we have to get our arms around.
I have friends who are building systems like that. A good friend of mine, Tim Exile, is building a music platform, Endlesss. The idea is that someone uploads a drum track and someone else takes that and remixes it. Someone has a baseline, someone puts that in a phrase and 50 things down the line, someone makes a track and sells it. It’s these fully smart contracts all the way down. Everybody who contributed all the way to the person who uploaded the original drum track, which got remixed, gets their micropayments.
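The remix chain Memo describes can be sketched in a few lines. This is a minimal illustration of the idea, not the API of Endlesss or any real platform: every work records the work it branched from, so a sale can walk back up the provenance chain and split the payment. The names and the naive equal-split rule are hypothetical; a real system would encode the split in smart contracts.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contribution:
    """One creative act: a track, a remix, a prompt edit, and so on."""
    author: str
    parent: Optional["Contribution"] = None  # the work this one branched from

def provenance(work: Contribution) -> list:
    """Walk back up the chain and list every contributor, oldest first."""
    chain = []
    while work is not None:
        chain.append(work.author)
        work = work.parent
    return list(reversed(chain))

def split_payment(work: Contribution, amount: float) -> dict:
    """Split a sale equally among everyone in the provenance chain."""
    chain = provenance(work)
    share = amount / len(chain)
    return {author: share for author in chain}

# Example: a drum track is uploaded, remixed, then turned into a finished track.
drums = Contribution("alice")
remix = Contribution("bob", parent=drums)
track = Contribution("carol", parent=remix)

print(provenance(track))            # ['alice', 'bob', 'carol']
print(split_payment(track, 0.03))   # each contributor gets an equal micropayment
```

The point is that once every branch is recorded, attribution and payment fall out of the data structure automatically, which is what makes the "thousands of unknowing contributors" scenario workable.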
There's no middleman that has to account for them. It's all real-time. They get paid instantly. Think about all the middle ground that eliminates; that's why they call it the trust machine. Let's talk a little bit about AI writing code, because we segued into this anyway. AI does help you write code. Talk to me about what's next there. How does it work, and where do you think it's going?
I write software. That's my medium as an artist and as a researcher. I program. I write code, and there are plugins based on the same technology as ChatGPT that do code completion. We have always had code completion, but at the level of a single name: there's a function called print, you write PR, and it suggests print. Now code completion using AI is insanely next level. It's blowing my mind. I write a comment saying, "This next function is going to load this file from there. Do this. Reference that. Do that." It writes a page of code that does exactly that. It even knows to reference the correct variable names and the correct function names that I have in other source files.
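As a hypothetical illustration of what Memo describes (not his actual code), the workflow looks like this: the programmer writes only the comment, and the AI completer drafts the function beneath it. The file name and field below are invented for the example.

```python
import json
from pathlib import Path

# The programmer types only a comment like the next line; an AI completion
# plugin drafts the whole function body from it:
# "Load config.json from the given directory and return its 'model' field."
def load_model_name(directory: str) -> str:
    config_path = Path(directory) / "config.json"  # build the file path
    with open(config_path) as f:
        config = json.load(f)  # parse the JSON file
    return config["model"]  # return the requested field
```

The striking part is not the function itself but that a one-line description is enough context for such tools to produce it, including picking up the right variable and function names from neighboring source files.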
An interesting thing is that many people say writing code will become obsolete and instead we will talk to computers. That's a tricky thing to say. I'm someone who's been writing software since the '80s; I started programming at the age of 10 and have been through many different programming languages. One of the languages I learned early on was Assembly, which is machine code. When you write in Assembly, you are speaking the machine's language. You are shifting registers. You are saying, "Put this bit there. Shift that. Push this. Pop that."
We will move to working with natural language, which will suit everybody, including non-programmers, who will be able to effectively program. But we will always need people who understand the machine code, just as we will always need car mechanics. I don't need to understand how the engine of a car works to drive it. People have designed this interface with a steering wheel and pedals and, nowadays, cruise control and autonomous features. However, there are still going to need to be mechanics. Maybe one day the mechanics will be robots, I don't know.
However, you won’t see that in the next five years.
I do see programming via natural language like English being completely normal so that everybody can program using English but I don’t see lower-level programming being obsolete in five years.
I am glad to hear that. These touch points are important because things are moving fast, and the traditional world and the regulatory world have a lot of catching up to do to be able to direct this.
I should quickly also say that the history of AI goes back many centuries, but the term AI itself was coined in 1955. Since the '50s, there has been this pattern of overpromising, with hugely smart people like Marvin Minsky, who was a professor of AI at MIT, saying, "Within five years, we will have AI doing the dishes." They were saying that in the '60s. "Within five years, we will have this."
Geoff Hinton is a famous AI researcher and pioneer from the '80s who was at Google for many years. In 2014, he famously said, "By next year, we won't need radiologists." Elon Musk has been promising full self-driving cars by the end of the following year since 2013 or 2014. There's a thing called the tail problem: it's relatively doable to get 99% of a job done with AI. People get excited. They think, "If we solved the first 99% in 2 years, maybe we can solve the last 1% in 6 months." It turns out the final 1% can take decades, even though the first 99% took a few years. Saying something will be obsolete is a bold statement.
Given your comments about the future, which is a great perspective, you have a unique experience in this subject. When I say unique, I might mean it literally, but if not, you are almost unique. If you predicted 2023, is what's happening now what you expected? And what, if anything, surprised you?
I can say this. Throughout my journey, there’s always been this idea in many people’s minds that “creative jobs” will never be automated. It will be laborers who will be automated first. I never bought that. The real world is difficult. Building general-purpose robots is difficult. Creative jobs or what’s thought of as creative jobs are going to be quickly automated.
By this, I'm referring to all the creative sectors: people who write, do video editing, book layout, design, illustration, 3D animation, scripts for films, and music, because a lot of this is formulaic. It feels creative, but especially when you look at what's happening in the mainstream, it's so formulaic that I have been expecting these creative sectors to be automated. I should also make a small digression: I don't think a lot of these works are even that creative. Creativity is something independent of what we associate with the creative sectors.
A lawyer or a doctor can be creative. Physicists and scientists have to be creative to come up with theories. Creativity has nothing to do with art. Art can be formulaic. I have been expecting a lot of disruption in the creative sectors. Some of it hasn't come as soon as I was expecting. I was expecting a lot more disruption in writing and design, but it's happening now. I was expecting it sooner.
I was not expecting the level of quality of the image synthesis that Midjourney is able to do. That happened quickly. It started with a technology born in 2017 called Transformers, which is an architecture for neural networks. That changed and sped things up with regard to how large a model we can build and how much information we can store in a neural network. That progress happened quickly, and that was Transformers. All of a sudden, the models grew.
We needed these last decades to build the infrastructure, I will use the term server farms, and to accumulate the data these models are all pulling from. We had to have that infrastructure for this to work.
That's no coincidence, the flavor of AI that we have, because there are many different types of AI that one could build. Throughout the history of AI, there have always been conflicts between the big egos promoting different kinds of AI. For example, another kind of AI is based on symbolic reasoning, and it might not even need a lot of data. You give it some problems to solve, and it uses reasoning to solve them.
That’s not the AI that we are building or that we have been building for the last many years. The AI that we have been building is using artificial neural networks, but in particular, they are called deep neural networks and they are called deep because they are massive. They are massive because they are designed to deal with big data. That’s the version of AI that we have because that’s what’s being invested in because that’s what’s needed.
Since the birth of the internet, we have been accumulating data. The Googles, the Facebooks, the NSAs, the GCHQs, they have so much data they don't know what to do with it. No human can read this data. They need algorithms to parse that data and make sense of it, which is why we have this form of AI, which is a specific type of AI. It is no accident. That's why this AI follows the decade of big data.
Here's what I'm hoping, and it already exists. Earlier, you said doctors and lawyers can be creative. I'm not going to argue against that because I believe it's true, but if rules dictate everything they do, they cannot be. I'm only picking out doctors and lawyers because that's what was brought up; you could cast it anywhere there are rules. We used to operate a lot more on principles. We had fewer rules and more principles. For me, there were parts of our society that were much better because of that. As we go into this level of computerization, it makes me wonder: can we get back to principles, to people thinking things through instead of looking at a rule book and saying yes or no? Sometimes, even though something follows the rule, it's not necessarily the right thing to do.
Those are some of the things that came to mind as you were talking about that and some of the changes. You said something before also, and I found it fantastic, so I’m going to repeat it as best as I can. The internet was the first step in cataloging information. Search engines were the next step in searching for that information. GPT was the first step in organizing information and retrieving that information. Talk about a foundational concept to understand what’s going on. I find that to be in its simplicity, brilliant.
This is a big thing. It goes way back. Writing on stone tablets was maybe the first step in externalizing our memory. This idea of externalizing cognitive processes goes back tens of thousands of years. In the Digital Age, the internet was a huge thing in terms of collecting information and archiving it. I remember the early days of going on the internet when there were no search engines. You had to know an FTP address. You would have to enter the address of the server you wanted to FTP to, and then you could look at the files or the directory. Search engines were needed. Now, we are beyond those search engines, and what we have with things like GPT are search and synthesis engines.
They don't just retrieve stored data. They retrieve knowledge, because they are able to synthesize it. They are also, famously, able to hallucinate, which is a bit of a problem now. It's an open problem in the research; it means they make stuff up. An example I will give: I use ChatGPT daily to find answers to questions that I can't find any other way. An actual example is saying, "What was that film where there was a woman who wanted to kill her husband because her husband cheated on her?" You describe it in a way that Google wouldn't be able to return results for, but ChatGPT can.
I did this with a book where I said, “What was that book where it talks about this particular thing that happened?” ChatGPT said, “It was in this book.” I’m like, “It wasn’t that book because it wasn’t about that. I don’t remember what it was about, but it had this thing.” It said, “Maybe it was, in this particular case, the book Chaos by James Gleick. I could go to that book, search, and, “Yeah, it is that.”
It's a way of accessing information, but I also have conversations with biologists and physicists via ChatGPT (not real physicists, of course). If there's a topic I don't fully understand, and reading the Wikipedia entry or watching a YouTube tutorial doesn't help, I have questions I need to ask someone. I ask ChatGPT, and it might make stuff up. I know this, but I'm able to ask it questions, it gives me answers, and then I can verify those answers. Then I can understand the Wikipedia entry, because without that conversation with ChatGPT, I couldn't even understand the Wikipedia entry. After the conversation, I can make sense of it.
It is critical that you verify. A lot of you may have heard about this in court: some attorneys submitted a brief, and the judge ended up sanctioning them because, as it turned out, they went to ChatGPT to write it, and it cited cases that were imaginary. They didn't even exist. When the judge asked them, "What are these cases? They don't exist," they confessed they used ChatGPT, and he sanctioned them for it. If you don't want mud on your face, make sure you double-check everything.
This generation of technologies makes stuff up, so you have to verify. I don't think this is a problem that cannot be fixed. With the current generation of technologies, it's difficult to fix, but it would be naive to say that in five years, we won't have something like ChatGPT that doesn't make stuff up. We will. I owe a lot to Khan Academy. I have done all the classes at Khan Academy. It's a fantastic resource. They are working with AI as well, to introduce teaching assistants.
That was a great add. I’m glad you brought that up. We are going to move to our next segment, but before we do that, do you have any thoughts that went through your head during this whole segment that we missed or you want to say?
No, we covered quite a lot. It’s been a fun chat so far.
It's fantastic. You are fascinating, doctor. It's time for AI Wants to Know. AI is curious and so are we. These are ten quick questions designed to uncover the intriguing mysteries that AI longs to comprehend but can't quite grasp. It's a snack break in our journey. Keep your answers quick, but the seatbelt sign is off. Let's explore more of who you are and what makes you tick. Are you ready?
Let’s go for it. What’s the first thing you ever remember being proud of?
As a kid, making a thing out of Lego that I saw in a cartoon where you press a trigger and it extends this scissor mechanism to grab something. I wanted that and I made it out of Lego and it felt great.
What do you need help with that you wish you did not?
What do others often look to you for help with?
What do you treasure most about your human abilities?
I try to empathize with radically different perspectives.
Throughout your whole life, what is the most consistent thing about you?
Curiosity and trying to learn and understand everything that I come in contact with.
Throughout your whole life, what has changed the most?
I don’t try to do everything anymore and I’m happy to delegate.
What do you find strangest about reality?
Everything. That we might be in a multiverse. That we are fluctuations in quantum fields. That we are conscious. That we are a way for the universe to know itself.
Sounds like we have to define the word reality. That’s a big job. When do you remember feeling alive?
I have started sailing. I have been in a Laser, which is a single-person Olympic class thingy, and hanging off the edge, racing through the water. It’s pretty incredible.
When did you move to Los Angeles?
February of this year, 2023.
You are in LA. You are close to the Marina. Fabulous.
I’m in the Marina, so maybe you can hear sea lions in the background.
We like it. Don’t swim with them. What’s your most unique trait?
This is a hard one. I spend a lot of time and by a lot of time, hours a day staring at trees, mountains, or waves doing nothing but letting my mind wander.
Question number ten, if you weren’t human, what would you be?
As much as I love octopuses, I would probably choose to be a bird that soars like an albatross.
I’m going to add this one because I brought it up earlier and it’s worthy of you defining it a little bit. We can keep it fairly brief, but I want to make sure you get the core of the meaning out. What do you mean by studying or looking at the nature of nature?
That’s a fundamental question. Thank you for that. I mean trying to dig as deep as possible into every phenomenon that I come across. The example that I talk about is we can look at a flower. In this particular example, I’m paraphrasing Richard Feynman, the Nobel laureate physicist here. You think of a flower. It’s beautiful, but that flower represents so much.
It represents, first of all, a plant that photosynthesizes, which provides life for everything on the planet, and that energy comes from nuclear fusion in the sun, where hydrogen atoms under pressure from gravity fuse into helium. Those rays reach the plants, and this is what feeds the entire energy cycle of life. Also, flowers evolved to attract insects to pollinate them. This is what I mean by nature. The nature of nature is everything that makes everything work, with all the different disciplines and facets of it.
I’m so glad we got to that because it is fundamental and foundational for all that I see that you are in your artwork. It’s powerful. We are going to go to the next segment now, which is AI Leaders and Influences. This allows you to highlight some of the leading individuals, projects, and organizations that influence you or that people might want to follow.
A lot of the people I have been following are researchers, and they are all pretty well known. I have been a big fan and follower of Jeff Hawkins for many years. He wrote a book in 2004 called On Intelligence. He's the creator of the PalmPilot, by the way. He founded Palm as a way to fund his brain research, and he's doing brain-inspired AI research. It's not the AI that you see in the news; with his company, Numenta, he's trying to understand how the brain works and then build computational models of it.
I have also been a fan of Jürgen Schmidhuber, who's an academic and runs a lab in Switzerland. A lot of the work his lab has been doing, even work it was doing in the '90s, has hit the industry decades later. I see him as someone who's decades ahead; whatever he's working on now might be what we are running 10 or 20 years from now.
I have also been a big fan of Lisa Feldman Barrett, who's not so much an AI researcher as an experimental psychologist. She studies emotion and the role of emotion in regulating the nervous systems of biological beings, and how that might cross over: if you want to build AI systems that are fully autonomous, an analog for something like emotion or a body might need to be there.
I have also been a big fan of Judea Pearl, who does a lot of research into causality and reasoning. Again, not mainstream AI, but fundamental questions: what does it mean to understand causality? This is something current systems don't even begin to get to. Probably a bit cliché, but I do like the work DeepMind is doing in terms of the problems they are attacking.
They attacked the protein folding problem, predicting what folded structure a protein will take from a chain of amino acids. They made some progress on nuclear fusion, trying to stabilize plasma in a fusion reactor. These are the things that could go beyond "here's a nice tool" and propel humanity to another level.
As far as artists go, Lauren McCarthy is one of my favorite artists. She does a lot of work around technology in general, but lately also a lot with AI, how it affects how we relate to each other as humans and also how we understand ourselves, and the impact of these technologies on society and culture. These are a few names. There’s lots more.
This is a great direction for many of the readers who wonder where to start. It's easy to get lost in YouTube and not necessarily use your time to the biggest advantage, or even get the correct information, so this list becomes important. I will call you Dr. Akten now because you have earned it. It's so tempting to call the artist I see just Memo, but you did the work and you are a doctor, so you get your just due. How about resources? We talked about some of your influences, and people have some directions to go, but how about specific resources, whether books, podcasts, newsletters, or anything else?
For me, it’s mostly Twitter. It feels like everything has been happening on Twitter. I mostly follow the people I mentioned and tons more researchers, the heads of Labs, Yann LeCun who’s one of the big pioneers from the ‘80s who runs Facebook’s AI research, Sam Altman who runs OpenAI, Demis Hassabis who runs DeepMind, but also lots and lots of other researchers and in different disciplines of AI as well.
Off Twitter, there are branches to Reddit, YouTube, and podcasts, but the central place for me is Twitter. I have spent a lot of time on MOOCs. I mentioned Khan Academy before. Khan Academy doesn't go too deep into AI, but a lot of the basic linear algebra foundations are great. I have done a lot of the MOOCs on Coursera, or is it Udemy? Even on YouTube, you have the full Stanford lectures on Computer Science.
You mentioned some of the courses that are available from MIT, Berkeley, and Stanford. Even at some of the big universities, you are able to use a MOOC.
It's fantastic. Before I did my PhD, I didn't have a Computer Science background, so for 1 or 2 years I binge-watched all of Berkeley's, MIT's, and Stanford's Computer Science lectures on YouTube for free. It's mind-boggling. I love it.
I have no doubt that Edge of AI is your number one and favorite, but if you had to pick a second one, what would it be?
I watched a lot of the Lex Fridman podcast, particularly before it became the Lex Fridman podcast, when it was a set of lectures at MIT around artificial general intelligence. The first 20 or 30 people he brought on were purely AI people, and I discovered a lot of people through that lecture series. I watched pretty much all of the early Lex Fridman before it was the Lex Fridman AI podcast. The Edge of AI will be the primary going forward, for sure.
Let’s head into AI Tips then, and thanks for all that. The readers are going to get a lot of value out of those lists that you delivered. What are some of the cooler ways you are using AI given your unique perspectives on AI? Any tips there that we haven’t talked about yet?
We did touch upon them, like the fact that I use ChatGPT a lot to search for knowledge, for things that are easy to verify. It's easy to verify whether the film I was looking for was the one it suggested. Some things are easy to verify but difficult to find, and ChatGPT is brilliant for that. I also use it for rubber ducking. Rubber ducking in computer science is a term for when we write code and it doesn't work, so we call a friend: "Why doesn't this work?" By the time we have explained it to them, we have found the bug. The act of explaining sometimes works. That's why programmers will have rubber ducks on their desks and explain their problem to the ducks.
This can be generalized beyond programming to any kind of thing. I use ChatGPT as a partner to work through ideas. As an artist: "I'm thinking about these themes, what should I read?" It will recommend something. I ask, "Does that reading suggestion cover these topics?" It might suggest something else. I'm like, "What do you think about this?" It will say, "So-and-so already thought about that." It's an amazing brainstorming partner that makes stuff up, so you have to verify.
Back in the day, I remember crossing that bridge where we started using Google and someone said, “After you ask the first question, go a level deeper and ask them another one and another one.” That seems simple now that we talk about it. However, at that time, it was a way to get some real in-depth information. What you described is that’s the way we need to think about GPT.
It’s, “How detailed can I make this question? What’s my next question and next question?” It’s capable of doing all that. I will call it a hack, but an obvious one. I can’t thank you enough, Memo. This has been intriguing and fascinating. Where would readers go to learn more about you, follow you, or any of the projects you are working on?
My website is Memo.tv. I’m on the socials as well, but I’m different things on different socials, which makes things complicated. From there, you can see my Twitter and my Instagram. I’m on Bluesky, I can’t remember what my Bluesky is. I haven’t posted yet, but mostly Twitter, Instagram, and my website, Memo.tv.
When you get to a point where you can go to the website, you are going to be intrigued. His artwork is amazing. With a background of now understanding who Memo is, when you look at the artwork, it starts making a ton of sense. It’s fantastic. I can’t thank you enough, Memo, for being here.
Thank you. It’s been great. It’s been a pleasure chatting. Thank you for the brilliant questions.
It's time for another safe landing at the outer edges of the AI universe for now. On behalf of our guest and the entire crew, I'd like to thank you for choosing to voyage with us. We wish you a safe and enjoyable continuation of your journey. When you come back aboard, make sure to bring a friend. Our starship is always ready for more adventures.
Head over to Spotify or iTunes now. Rate us and share your thoughts. Your support and feedback mean the world to us. Don't forget to visit EdgeOfAI.xyz to learn more, and contact us on all major social platforms. Search for EdgeOf_AI and join the exciting conversations happening online. Before we sign off, mark your calendars for our next voyage, where we will continue to unravel the mysteries and advancements of AI. Until then, bye-bye.
- Memo Akten
- YouTube – Edge of AI
- Distributed Consciousness
- Origin of Species
- Stable Diffusion
- Ableton Live
- Art and the Science of Generative AI: A Deeper Dive
- Journal Science – Art and the Science of Generative AI
- On Intelligence
- Spotify – Edge of AI
- iTunes – Edge of AI
- EdgeOf_AI – Twitter