Edge of AI

Hollywood, Trust, And The Future Of AI: Edge Of AI Launch Party Insights

EOA Launch Party | Edge Of AI

 

AI is booming. It is a reality we all must face and move with. But as with all things revolutionary and evolutionary, we have to confront a blend of opportunities and uncertainties. The Edge of AI deeply probes into the promises and questions AI brings. In this special launch party episode, we are joined by some of the leaders in the AI technology boom to share their thoughts about the current state and future of AI. We get to hear not one but two sets of interviews. Together with our very own Ron Levy, we hear from Ramsay Brown of The AI Responsibility Lab, Jyo Deshmukh of USC’s Center for Autonomy and AI, and Chris Coughlan of Artium. They dive deep into public safety, AI regulation, efficiency, and innovation. We also get to learn about the intersection of AI and the creative industry from Josh Kriger’s interview with Les Borsai of Wave Financial and Rachel Joy Victor of FBRC.ai. They shed light on the pros and cons of AI and where they sit in the context of the recent writers’ strike in Hollywood. AI has undeniably already embedded itself into our lives. Learn more about what it holds for us today and tomorrow, starting with these conversations.

 

Key Takeaways:

  • AI’s Dual Nature: AI presents both opportunities and uncertainties, driving evolution in entertainment and beyond.
  • AI & Creativity: Leaders explore AI’s impact on public safety, regulation, and innovation, probing its role in shaping the future.
  • AI’s Integration: AI intersects with creative industries, offering unique content generation while challenging established norms.

 

Quotes:

  • Ramsay Brown: “The concerns are completely merited and we are completely warranted to be fundamentally concerned about the direction and trajectory of AI technologies as they are being developed.”
  • Jyo Deshmukh: “AI has evolved in various ways and ChatGPT is one of the products that we see that is in the public imagination.”
  • Chris Coughlan: “If you develop software the right way, then the outcomes are going to be the good thing that we are looking for.”
  • Rachel Joy Victor: “Generative AI is able to output content in a unique way because it’s built on large data sets that don’t need to be labeled, so it’s able to spit out a large amount of content.”
  • Les Borsai: “The cool thing about the whole space is the imagination you can have and how you can disrupt.”

Listen to the podcast here

 

Hollywood, Trust, And The Future Of AI: Edge Of AI Launch Party Insights

We’d like to extend a shout-out to our sponsors whose support and contributions have made this show possible. First, we have Artium. Artium is an expert team of software engineers, developers, and craftspeople, combining the latest in software intelligence with the expansiveness of human creativity to create high-craft technology that helps push their clients’ businesses forward.

We also have AI Podcast Lab. They empower you to advance your podcast and business with the help of AI serving as your very own private studio and research facility. Let’s get this show on the road. Joining me on stage are all leaders in the AI technology boom that’s going on. Ramsay Brown is the Founder and CEO of The AI Responsibility Lab and Mission Control pioneering platforms for trustworthy generative AI with a career spanning nanotechnology, brain mapping, behavioral engineering, and AI safety.

Ramsay’s expertise in AI has led him to collaborate with Fortune 500 companies, governments, and militaries, making the future actionable and approachable for leaders. He drives the acceleration of quality, velocity, and trust in AI for the world’s most successful brands, cementing his position as a notable figure in the AI industry.

We have Jyo Deshmukh. He is a distinguished professor, researcher, and co-director of our very own USC Center for Autonomy and AI. With a focus on the intersection of formal methods and machine learning, Jyo’s research group investigates the safety, explainability, and trustworthiness of AI and machine learning-enabled software-controlled systems, particularly in the context of cyber-physical systems. He has made significant contributions to mathematically analyzing and verifying the safety and reliability of critical systems, including self-driving cars, medical devices, unmanned aerial vehicles, and more, cementing his position as a leading figure in AI.

We have got Chris Coughlan, the Chief Business Officer at Artium. He leads the client development team and strategy driving growth and success for the company and its clients. With a career marked by building and developing strong businesses through sales, customer success, product development, and strategic planning, Chris has honed his skill in effectively marrying vision and strategy for AI-driven solutions. His experience in understanding business challenges, developing innovative solutions, and delivering exceptional products and services through cutting-edge technologies, including AI, has been instrumental in driving success for clients and organizations across various industries.

I am Ron Levy. I am the CEO of The Crypto Company. We are one of the first publicly traded companies in the crypto blockchain space. I also have a background in growing companies and business operations. We also run an education company that teaches blockchain to many Fortune 500 companies.

We all know that AI brings promise to every industry but questions remain about how much autonomy is acceptable and how we manage public safety in the face of potential nefarious use cases. Let’s delve deeper into that part of the conversation. For that, Ramsay, I’d like to start with you on this one. Let’s get real about the worst-case apocalypse scenarios that are covered in the media all the time. Is there any merit or anything there that should concern us?

I’m glad that you are giving me the easy softball questions first. I appreciate that. I will spare you the joke about whether you want the good show answer or the real answer. The real answer is that the concerns are merited. We are warranted to be fundamentally concerned about the direction and trajectory of AI technologies as they are being developed.

My organization looks at these from the perspective of three timescales. If you look at the 2035, 2026, and 2023 timescales, you will find real cause for some consternation and a desire to do this right, because we live in a civil society. We are not abstract actors worrying about someone else’s world. This is our world. We have to live in it. We as leaders are responsible for which direction it goes.

Briefly, on the 2030 or 2035 timescale, the concerns align quite well with Jyo’s research: if you have autonomous systems that are capable of making their own decisions in real time and you have given them a set of instructions about what to go out and do in the world, will they allow you to do things like tell them what to do if they become even moderately smarter than a median-intelligence human? Will they let you turn them off and interfere with their ability to get the job done when they have been told they need to go get that job done?

Ostensibly turning them off interferes with their ability to get the job done and they might view that as a challenge to overcome in the same way they might move something out of their way to get to an objective. Can we adequately align their behavior with human values? In my desire to get here on time on my bike ride from Marina del Rey, I didn’t run stoplights or run over pedestrians because I value following the rules and not hurting people.

How do we encode the fuzzy parts of human values into machine systems? To Jyo’s credit, it’s part of his research. It’s very exciting. If we need to modify them or disable them, will they allow us to do things like this? These are not only interesting philosophical questions. Google DeepMind has looked at them and said, “It’s not that we are unclear. The answer is probably no.”

This is why Jyo’s work on autonomous safety-critical systems is so important. The new best state-of-the-art understanding is that once we turn things on that are moderately as smart or smarter than us, our chances of meaningfully staying in charge of the situation approach zero pretty fast unless we have drastic new interventions.



 

If you know that to be the case, you start building safety-critical systems. What does that mean for 2026 then? If you have met a CFO, a VP of Finance, or somebody responsible for building the profit and loss sheet for a company and deciding whom to hire, and you understand the incentive structures they are under, you know their responsibility is to do right by the firm, not to do right by the laborers the firm employs.

Their job is to maximize value for shareholders and the board, not maximize employment. If we are entering a world with generative AI where AI systems can do many of the types of jobs that we do, your CFO is going to have to do a very hard piece of math in her head: how she weighs the trade-off between cost synergies on labor and the need to keep people.

An appreciation of the incentive structures that financiers, boards, and publicly traded companies are under spells out some incentives that we might not be comfortable with. Some of the decisions we make around that are going to determine what employment looks like in the following years. The back-of-the-napkin numbers from Stanford spell it out, so let me ask everyone reading: do you use a MacBook or a PC to do your job?

Raise your hand if you spend your time staring at email, Slack, or MS Teams, or if you are on Zoom calls saying, “No updates from me.” You don’t deliver babies, flip hamburgers, dig ditches, pave roads, or fly airplanes. You are a knowledge worker. I’m a knowledge worker. Stanford University says that by 2026, 6 out of 10 knowledge workers will be “seeking retraining,” which is the polite term of art they use, meaning structural unemployment due to automation.

If we rewind to what the risks look like now, we are walking into a hotly contested election in which we can’t even agree on whether the last election was real, in a world where the synthesis of voice, video, and dialogue is automatable. This is the election in which every person here is going to get a two-way phone call from Joe Biden and Donald Trump. If that is not the thing that undoes the fabric of civil society and our ability to be in consensus reality, I don’t know what it’s going to take. All of these things.

If you know they are coming and you have an adequate 6 to 18-month warning period, you start building safety-critical research and applied technologies to be able to do that. When I look at these risks, they are there. They are real and they warrant us pouring capital into solving them such that we may continue to live in a society that has our virtues and values.



 

That’s incredibly well described, both the problem and the solutions. One thing that comes to mind is that both of you are doing the research and development of the protections, as I will call them, yourselves. You are not waiting until there’s a problem. You are looking around the corner and doing it on your own. That’s the part of the industry I want the public to start seeing, because AI is becoming super exciting but we don’t want it to become an evil word.

There are 3 executive orders on the books, 32 federal mandates, and 120 state laws, either passed or proposed, for regulating data and AI. You already live in a world where the powers that be are taking this extremely seriously. Every major government and trade organization has come to understand that protecting both profit and the structure of civil society depends on our ability to harness this technology in the next few years. Everybody is already on board.

We are going to get into that, the regulators and then the builders, and see how that’s marrying up. There’s a difference in speeds in the way we operate in it. Jyo, I want to go to you. Your research covers a lot of the most promising uses for AI from self-driving cars, unmanned aerial vehicles, and medical devices. Those are three very powerful sectors right there. All of these have serious risks around human life and critical infrastructure. How do these types of use cases in particular impact the broader safety conversation? Are we mixing apples and oranges when we talk about ChatGPT versus self-driving cars?

Let me say that the answer to this question is both yes and no. Let me first tell you why comparing ChatGPT to self-driving cars is like comparing apples to oranges. Ramsay is probably a much greater expert in the field of ChatGPT than I am but generative AI fundamentally relies on natural language processing-like tasks.

ChatGPT was invented for language tasks, such as summarizing a piece of text or generating a good response to a question that you ask. The fundamental models it uses are, without going into too many technical details, things called transformers. Self-driving cars, if you look at them, use AI in very different forms. To the common public, AI is one monolithic entity, a smart brain that lives inside the computer.

In reality, the word AI covers a number of different methods and algorithms that have been evolving throughout the history of computer science. Part of it has to do with symbolic reasoning and whether we can use a computer to prove theorems or do complex math. AI has evolved in various ways. ChatGPT is one of the products that we see that is in the public imagination. It has caught a lot of people’s fancy because it has made things accessible to the common people. Self-driving cars, by contrast, use very different kinds of AI.

If you think of a self-driving car, an unmanned aerial vehicle, or a medical device, you can think of the software that underlies these applications as software that perceives the environment. If I’m a self-driving car: what is in the environment around me? Where are the people? Where are the cars around me? Can I predict how the things that move in my environment will move? Based on that, how can I make decisions that help me reach my goal but, on the way, hopefully, don’t hit pedestrians or cars? That’s the motivation for the perception systems in self-driving cars.

Other kinds of AI focus on decision-making: given all the information about the environment, what decisions do I take to achieve my objective, and achieve it safely? These are different AI-based modules that comprise these safety-critical systems, but the main difference between ChatGPT and self-driving cars is the impact they have on safety criticality.
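The split Jyo describes between a perception module and a decision-making module can be sketched as a simple pipeline. Everything here, the class names, the stand-in sensor data, and the braking rule, is a hypothetical illustration of the architecture, not real autonomy code:

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "pedestrian", "car"
    distance_m: float  # distance ahead of the vehicle, in meters

def perceive(sensor_frame):
    """Perception module: turn raw sensor data into labeled objects.
    Here the 'sensor frame' is just a pre-labeled list, standing in
    for real camera/lidar processing."""
    return [DetectedObject(kind, dist) for kind, dist in sensor_frame]

def decide(objects, safe_gap_m=10.0):
    """Decision module: pick an action given the perceived environment.
    A toy policy: brake if anything that can move is too close."""
    for obj in objects:
        if obj.kind in ("pedestrian", "car") and obj.distance_m < safe_gap_m:
            return "brake"
    return "cruise"

frame = [("pedestrian", 6.0), ("car", 40.0)]
print(decide(perceive(frame)))  # pedestrian 6 m ahead -> "brake"
```

The examples Jyo gives (a white truck read as sky, a pedestrian walking a bicycle) are failures inside `perceive`: when the labels coming out of perception are wrong, even a sensible decision policy acts on the wrong world.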

If you look at ChatGPT and it makes an error, the worst thing you are going to face is you send out a cover letter that looks like nonsense or you have a piece of code that has some bugs and maybe you have to debug it. You generate an answer to a question that contains some fake references that ChatGPT hallucinated.

These are the worst errors that can happen but look at some of the incredibly damaging things that self-driving cars have done in history. All of these three are quite grim and morbid but these are my favorite examples. That’s who I am. I’m here to poke holes in AI. Look at one of the first accidents that a Tesla vehicle was in Florida where it confused the side of a semi-truck.

The semi-truck was painted white. Its camera-based system confused the white side of the truck with the sky. The car couldn’t decide whether it was a truck or the sky. It decided it was the sky and plowed into the truck and unfortunately ended up killing the driver who at the time was watching a Harry Potter film in the backseat. Why did this happen? The AI couldn’t distinguish between the white side of a truck and the sky.

My second example is even grimmer. This happened in Tempe and set off a chain of events that led to Uber ATG stopping its self-driving experiments. The car got confused by a pedestrian who was walking their bicycle at night, and it couldn’t decide whether it was seeing a pedestrian, a bicycle, or a car. By the time it made that decision and predicted that this was something it was going to collide with, it was too late. It ended up hitting the pedestrian and killing them. It was extremely grim.

If you have been following the news, there have been a lot of reports about Cruise vehicles getting in the way of emergency response vehicles in San Francisco and even in Austin, Texas. They blocked traffic because the vehicles weren’t told that if there was an emergency vehicle behind them, they needed to get out of the way. This is not something the AI was programmed to learn. In this sense, they are very different. It’s apples versus oranges.

There’s a yes and a no. In some way, they are comparable because in both these applications, what’s at the heart or core of them is what are known as neural networks. For those of you who haven’t heard of neural networks before, you can think of them as mathematical structures that we represent in code that somehow are similar to the neural structures within our brains.


These neural networks are the powerhouse of all of the AI that is coming out, and it is fantastic advances both in how we train these neural networks with data and in the hardware platforms they run on that have pushed this AI revolution. Had Nvidia, Arm, and AMD not come up with fantastic hardware platforms, the AI revolution would not have happened.

AI has gone through several winters where funding AI and interest in AI dried up but the new AI has happened because of these hardware platforms. Both ChatGPT and self-driving vehicles at their heart have these neural networks. Unless we mathematically understand how these neural networks work and as Ramsay’s work has been focusing on adding guardrails to what these neural networks can do, we can never have peace of mind when we sit in a self-driving car for example. It might decide that instead of taking you to your destination, it takes you to the mall because somebody programmed it to take you to a mall where you can shop more and earn revenue.
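Jyo’s description of a neural network as a mathematical structure represented in code can be made concrete with a minimal sketch. The layer sizes, weights, and choice of tanh below are arbitrary illustrative assumptions, not anything from the episode:

```python
import math

def forward(x, weights, biases):
    """One pass through a tiny fully connected network.

    Each layer multiplies its inputs by a weight matrix, adds a bias,
    and squashes the result through a nonlinearity (here, tanh),
    loosely analogous to neurons firing in the brain."""
    for W, b in zip(weights, biases):
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# A 2-input, 2-hidden-unit, 1-output network with hand-picked weights.
# Training would adjust these numbers from data; here they are fixed.
weights = [
    [[0.5, -0.5], [0.3, 0.8]],   # hidden layer: 2 neurons, 2 inputs each
    [[1.0, -1.0]],               # output layer: 1 neuron, 2 inputs
]
biases = [[0.0, 0.1], [0.0]]

print(forward([1.0, 2.0], weights, biases))
```

The “mathematically understanding how these networks work” that Jyo calls for is hard precisely because real systems chain millions of these simple operations, and the guardrail question is what the composed function can and cannot output.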

It’s great hearing you guys speak because you are all in the industry a lot earlier than November of 2022. It’s been going on for a very long time but I consider November ‘22 the same as 2004 for the internet. Why? The masses got high speed. Before that, it had uses but to get to the masses, high speed was necessary.

In this case, ChatGPT came alive. We can all touch it and a lot of us are. It’s starting to change. It feels like it’s brand-new. It’s not. When you have got seasoned professionals like you guys in it, you need to be heard because you didn’t start yesterday. That’s important. The one thing about self-driving cars I have always wondered about is what happens when there are two bad choices. Hit this person or hit that person.

This is being studied. A show of hands in the audience: who knows the trolley problem? The trolley problem is the perennial thought experiment about how we value human life. You see a runaway trolley going down the tracks and a lever that controls which track to send it to, and you can get to it in time. You see that if the trolley goes down the first pathway, it will kill one person, but they are very old. If you switch to the second track, it kills three people, but they are the opposite.

You are forced into a situation where some human life gets ended, but you get to decide, through action or inaction, which human life to take. This problem is hard enough when you try to work it out with humans, but now you have to automate it. Automated trolleyology is the study of how we are supposed to encode human value systems into a decision-making system whose ending of a life we then have to justify legally and insurance-wise as the righteous one. That is an open question. There is no consensus answer on this yet.
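Ramsay’s point about automated trolleyology is that any autonomous decision system implicitly encodes a value function over outcomes. A toy sketch makes the discomfort concrete: the cost function below is a deliberately naive assumption, and choosing such a function is precisely the unresolved ethical problem he describes:

```python
def outcome_cost(people_harmed, weight_per_person=1.0):
    """A deliberately naive value function: total harm is just a
    weighted count of people affected. A real system would need far
    richer, and ethically contested, inputs (age, certainty, legality...)."""
    return len(people_harmed) * weight_per_person

def choose_track(track_a, track_b):
    """Pick whichever outcome the cost function scores lower.
    The hard part is not this comparison; it is agreeing on the
    cost function in the first place, and justifying it afterward."""
    return "A" if outcome_cost(track_a) <= outcome_cost(track_b) else "B"

# One person on track A, three on track B: the naive function picks A.
print(choose_track(["person1"], ["person1", "person2", "person3"]))  # -> A
```

Every parameter here (even `weight_per_person=1.0`, which asserts all lives count equally) is a moral commitment hidden inside code, which is exactly why there is no consensus answer yet.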

And you are potentially programming it into something that keeps reprogramming itself onward. These dilemmas exist even setting aside the potential harm to people. It’s a fascinating study and we are all in it, so it’s fantastic. I want to go to you, Chris. For companies looking at AI with a real focus on efficiency and innovation, how do they navigate the ambiguity and still push forward?

At Artium, we have the luxury and the privilege of working with everyone from startups that are defining and building their companies from scratch, all of them talking about AI and how to embed it within their products from the start, up to enterprises that are looking at innovation within their product stack, thinking about ways they want to disrupt their industry, and deciding whether AI is a tool or capability they want to build into their products.

We have talked already about risk. We have CFOs; managing risk is essentially what CFOs get hired to do. We talk a lot about risk as well, but when we talk about it with our clients, we talk about both the risk and the reward side. There’s always going to be a balance against saying that something is too risky, that it’s all risk, so we are not going to do anything. We need to continue to push forward and take some action to advance the field and the capabilities of things like generative AI.

When we are talking to clients, we are talking about that instance of getting started and looking at it through the potential positive use cases of AI. What can it bring to a company, your customers, or society? If we are very intentional about what those use cases are and the value they can bring, even without implementing the technology, we can start to look at it and say, “Is the risk worth this potential upside?”

At Artium, we are very focused on helping our clients get started and do something. As long as they are doing that intentionally and thinking through the risk and the reward, we believe there’s going to be a point where people start to say, “We need to do something. We don’t necessarily want to just jump in head first because there are a lot of unknowns on both the political side and the technology side.” Getting started is what we think propels this technology in one direction or another.

We also look at AI and Ramsay, this goes back to what you were talking about around knowledge workers. CFOs are out there looking at cost and cost efficiencies but we look at AI through the lens of what superpowers is AI giving to the various people that it touches. We look at it through the lens of a software engineer. What superpowers do our software engineers have that they might not have had access to before November of 2022?


When we propose that to clients, we are not looking at it through how you take some costs out of the business. It’s how you take your workforce and allow them to provide even more value than they were capable of in the past. For us, it’s about balancing risk and reward, but if you are not at least having those conversations about what you think the upside could be, then nobody is ever going to get started. We think that’s the wrong approach.

With your teams, expertise, and what you do, it’s fascinating to me because some of your clients are startups. Maybe they come to you and say, “We need exactly this. We want the AI difference. What is it?” You have also got major corporations coming to you. Can you speak briefly to how you address that? There are so many parallel technologies available for you to go to.

It goes back to the idea of what’s the use case and what are you trying to achieve at the end of the day. In startups, there’s a different risk profile. It’s usually 1 or 2 founders that are making those decisions. They get to decide what’s right for them at that time. They are out there hustling, trying to create a business. Within the enterprise, there are a lot of different stakeholders.

There’s never going to be consensus at those levels. It’s about having those deliberate conversations, finding the people that feel very strongly in one way or the other, getting them in a room, and talking about it. That’s the key. Everyone needs to be part of that conversation and be aligned in a direction forward.



 

It makes a lot of sense, but it’s ever-growing. One of the podcasts we are working on is with a company that serves retail. Three or four of their AI use cases come to mind that are different from one another, but they all combine to build their business. That’s what you have got to contend with. Your business has to be snowballing: the more knowledge people get, the more they ask for, and the more that’s available. It’s pretty fantastic.

Maybe three brief answers from each of you. We touched on the regulators and some of the laws coming to be that Ramsay mentioned, but then you have the speed at which all of this is moving. We have got three responsible people here with very responsible companies. We know there are companies of all sizes that are responsible and companies of all sizes that are not. With the pace of regulators and the industry, any comments around that? Ramsay, you can start.

I’m pleasantly surprised consistently by how fast the United States government is moving around this to not just set up regulatory barriers like the European Union has with the EU AI Act. They follow the path of the United Kingdom and set up regulations that are pro-innovation and pro-industry while still keeping a safeguard in civil society.

The fact that the US has spun on a dime to take this as seriously as they have through nimble working groups between the intelligence community, the Department of Defense, and the US government writ large is consistently surprising me. Not that I want to see sluggishness but when you are so used to something like making fun of the DMV for taking so long and suddenly the government has a hypothesis on thinking machines, you are a little pleasantly surprised by this whole matter.

Jyo, do you have any comments on that?

That’s a great comment. I’m glad to hear all these things about AI. Autonomous driving is one of those areas where more regulation is needed. One of the challenges in regulating AI generally is: how do you regulate AI, who does the regulating, and how does the regulation get enforced? For example, take avionics companies. All of you who have flown commercial know that not a single jet takes off without several years of review by the FAA. Not a single medical device gets used or sold to the general public without several years of review by the FDA. Who does that for self-driving vehicles? That’s not clear. How do we do that? It’s an interesting question.

That’s interesting the two answers so far. That’s the dilemma in both of your comments but, Chris, you are not getting off the hook.

From my perspective, we employ incredible craftspeople who are software developers and designers. I believe that people are fundamentally good. When you give them great tools, they are going to do the right thing with those tools. The practices we deploy as software engineers at Artium are things like test-driven development, where we can build in some of those protections, specifying what we want the software to do, before we even start writing code.

If you start thinking intentionally about how to make sure that anything you are building, whether it uses AI or not, has test cases based on a good, unbiased outcome, you remove a lot of that risk. We also pair-program, so it’s hard to get two engineers who are willing to work together to do something bad at any one time. By pairing, we are essentially checking and balancing ourselves and making sure that we are doing things the right way. We strongly believe that if you develop software the right way, then the outcomes are going to be the good thing that we are looking for.
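Chris’s point, writing down the expected outcome, including the guardrails, before any implementation exists, is the core of the red-green loop in test-driven development. The function and its spec below are hypothetical, chosen only to show the shape of the practice:

```python
# Step 1 (red): write the checks first, encoding the outcomes we want,
# including a protection case, before the implementation exists.
def check_apply_discount():
    assert apply_discount(100.0, 25) == 75.0   # applies the percentage
    assert apply_discount(10.0, 150) == 0.0    # price can never go negative

# Step 2 (green): write the simplest implementation that passes the checks.
def apply_discount(price, percent):
    discounted = price * (1 - percent / 100)
    return max(discounted, 0.0)  # the guardrail the test specified up front

check_apply_discount()
print("all checks pass")
```

The guardrail (`never goes negative`) exists because the test demanded it first; that is the sense in which the protections are built in before code is written rather than patched in after something goes wrong.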

Your three answers define why this is an amazing panel here because they were all different from one another and they come from so much experience. It was super thought-provoking. Thank you for all that. We will start with you, Chris. Where do people find you if they want to look at Artium and see what you are doing or track you? How do they do that?

Thanks for that. It’s www.ThisIsArtium.com. Find us on our website, and we also have some great social out there, plus our podcast, Crafted, which is about the great products that are out there and the people who create them. Thanks for having us.

Please look up the Center for Autonomy and AI at USC. It’s super simple to remember AI.USC.edu. This is a center that I co-direct with my colleagues in ECE at USC. You can find me there.

It’s pretty easy to find. We are UseMissionControl.com. I’m on LinkedIn and come say hi when we are all done here.

We also have the AI Podcast Lab by Podetize, which is here to advance your podcast and business with the help of AI, serving as your very own private studio and research facility.

Joining me on stage are two amazing leaders in AI technology. Les Borsai is a highly successful entrepreneur and consultant specializing in the cryptocurrency, blockchain, and entertainment industries, with over a decade of experience managing recording artists, including Wynonna Judd and Jason Mraz, and launching successful startups in digital music and cryptocurrency.

Les is the Cofounder of Wave Financial, a registered investment advisor with $1.5 billion under management that bridges the gap between traditional asset management and cutting-edge digital currencies. In addition to all of this, he’s been in the world of IP licensing for many decades. His visionary approach and expertise in the realm of digital assets continue to shape and redefine the future of technology and entertainment. What a bio.

We have Rachel Joy Victor, an independent designer, strategist, and world builder exploring emergent technologies, including XR, AI, and Web3, for cohesive narrative experiences at the intersection of systems and humans. She’s the Cofounder of FBRC.ai, connecting studios and startups for content innovation. Her expertise in computational neuroscience and spatial economics informs her data-driven immersive designs. She has worked for some big-name folks like Disney, HBO, Technicolor, Vans, Ford, Nike, and many more. She also leads executive education sessions for Activision, Unilever Prestige, Warner Brothers, Sony, Crocs, and Red Bull. The list goes on and on.

She’s spoken to a lot of different audiences, including the NAB Show and Games for Change, about design for emerging tech. We are so glad to have her with us as well. I am Josh Kriger. I will be your captain for this exhilarating voyage. Just like you, I have an insatiable curiosity that has led me on a cross-industry entrepreneurial journey building transformative companies.

As Cofounder of Edge of Company, I have hosted over 250 conversations like this with emerging tech leaders. Artificial intelligence has been part of my toolkit for a long time. I was a Cofounder of one of the largest food tech companies in the United States, Territory Foods, which led me to come to LA. I architected the menu planning algorithm based on consumer taste. Before all of this, my roots in consulting included supporting geospatial visualization services across 28 federal agencies and a predictive homeless analytics initiative to curb veteran homelessness. We will navigate uncharted territories in AI. Buckle up and get ready to embark on an amazing adventure. Let’s set sail.

I don’t think there’s a more relevant mainstream media conversation than AI in Hollywood, and that’s why we wanted to have this conversation with you both. We all know at this point that artificial intelligence offers immense possibilities to enhance creativity and engagement and to create new types of fan experiences for audiences worldwide.

It’s uncharted territory and it has its challenges too. We are trying to figure out how to get through those. It makes sense to start the conversation with how AI is revolutionizing the creative process in the entertainment industry at its core. What are some of the tools being used to produce captivating content? Rachel, why don’t we start with you?

It helps to understand that AI has been utilized in the film industry and the media for a while. There are different types of AI. What we have seen from November 2022 onwards, in this hype-cycle-driven interest in AI, is generative AI, which is one category of AI; there are a lot of other types. There’s machine learning that’s been built into tools that have introduced efficiencies for a long time.

There are some interesting things being brought about with this next generation of AI tooling. Generative AI is able to output content in a unique way because it’s built on large data sets that don’t need to be labeled, so it’s able to spit out a large amount of content. What we are seeing out of that is innovation across three sectors in relation to media and entertainment.

One is introducing efficiencies within existing workflows. That’s both within traditional production and within virtual production as a part of the film production process. For example, you don’t have to animate every frame; you can animate the keyframes and the animation between those keyframes can be generated. Those efficiencies have already been around for a while.

We are seeing a new category of innovation around script-to-screen or script-to-concept models. That’s where we are seeing generative images that allow you to put in a prompt, get a visual output at the level of the frame, and create without necessarily needing the typical infrastructure that’s required around producing content. There are advantages and disadvantages to that as a model, and there are trade-offs in what it enables. I’m sure we will go into a little bit more about that, but that’s a new category of creation that’s emerged with generative AI.

The third category of how AI facilitates content creation is AI and procedural creation, which has been happening for a while. It’s the opposite of generative AI in some ways. If we think of generative AI as bottom-up trained models, procedural AI is you create the architecture and the rules for how systems intersect and you allow those systems to intersect.

If you saw simulations like the South Park episode, that’s more based on procedural AI. We are going to see a lot of advancements in the hybrid and the intersection of all of these things working together. These are categories of things that are enabled with different AI tooling but they are coming together in interesting hybrid ways.

I appreciate the breakdown there because I feel like the media meshes all this stuff together. It’s in the nuances where the real opportunities lie. Les, you have been an innovator and disruptor on all sides of entertainment over the last few decades. How does this moment in time compare to past stages of rapid innovation? Is this a different moment for some reason or another?

I would like to hope it is. When I think back to the experiences I had with innovative technologies coming into studio systems or studios where I worked, our natural defense mechanism was to litigate versus embrace innovation. We used IP as a weapon. I say we because I was there when it was happening, and it always pisses me off that we couldn’t embrace these young bright minds that were innovating.

One of the things that led me to cryptocurrency and connected me to it was the idea of collaboration. You didn’t have to have a set agenda as a company. You could be a finance company and produce films. I love this idea of smashing things together. One of the key markers in my life was AOL buying Time Warner. It was prime for disruption. It didn’t end well, but the fact that it happened was always one of those big motivating factors for me.

When I look at AI and how it’s going to disrupt, we have already seen a ton of music content being created. I wrote an article that ended up in Spin about this. It was about musicians being immortal through AI. The fact of the matter is touring musicians who are on the other side of their career can’t do what they used to do. Maybe they can’t even write, in some cases, the way they used to write. They can almost write with themselves by putting prompts into artificial intelligence and co-writing with themselves.

The cool thing about the whole space is the imagination you can have and how you can disrupt. I have been working on an AI project, which I will get into a little bit. It’s that idea that not everything has to be exactly as it’s written. The premise is that everything in Hollywood has to be hit-driven, and that sucks because it kills the creator economy. There should be a builder economy that allows creators to make more, where it doesn’t have to be a hit every time.

Both of you have also dabbled in both AI and Web3, and I’m sure other innovative technologies as well. What I’m hearing from you, Les, is that there’s also this cross-entertainment mashup or convergence happening. At the same time, there’s this Web3-AI convergence where you can start from point A and get to point C via multiple different paths that weren’t possible before, where you go from music to gaming, or from gaming to a popular show that I binge-watched on Netflix.

It’s all about storytelling, and now that you have that anchor of a story, you can be creative outside your comfort zone because you have these tools that enable you to accelerate innovation without learning a new craft. Is that what’s gotten you excited about gaming and its intersection with Hollywood, or is it something else?

That’s partially it. There is this general idea that it has to be Web2 or Web3. The truth is you can take great elements from both. Web2 has an audience and speed that you can tap into. Web3 has decentralization and monetization that comes in a different way. Those intersection points are what’s exciting, and you mentioned world building; that’s the point.

You can build worlds and incorporate a lot of those different concepts into the world. The fundamental problem with the way the system works is that it’s not about monetization. The several hits pay for the many failures. If you can create enough points of economic value, you can take these different technologies, apply them together, and create something with a bigger audience and bigger adoption versus limited adoption.

Rachel, how do you sell this to your clients when you are thinking about taking interactive narratives and characters and applying them across different genres?

It’s a constant evolution of where the industry is at and what they are willing to accept. To your point, Les, of supporting this type of cross-format narrative, there’s been a desire for it in different ways. It has gone by a lot of different names. These have been called transmedia narratives before. The term comes in and out of popularity.

There’s a desire on the consumer side to have a persistent and continuous experience, and on the brand side to think about the long tail and build longer-term value around the IP they are creating, but it runs up against a lot of real-life issues around how organizations, production cycles, and budgets are structured.

For example, if I’m working with a brand, they might like the idea of building a brand experience ecosystem that connects their product, the content and marketing they are creating, and the metaverse world they want to build. At the end of the day, their budgets are quarterly. Their KPIs aren’t necessarily rewarding them for building that type of engagement.

Some of it comes from a longer-term conversation about how we reshape metrics. How do we show that there’s value in building community over time, thinking about the LTV (lifetime value) of engagement across a content ecosystem, and making sure that we are updating the KPIs and the metrics we are looking at to reflect those types of things? That’s one category of pushing this type of innovation.

The other is figuring out ways to backdoor a more cohesive narrative. You only have funding to make this activation or build this one thing for this quarter, and maybe a little bit more. Let’s start thinking about what your long-term world is and the type of narrative that you want to tell for your brand, and we will build that activation. We will make it compelling. We will do all the things that make it a sell, but we will also put in a piece of that long-term strategy. We will put in a piece of that world that you want to build and also build out enough strategy for what the next piece is going to be.


It’s not necessarily doing it the ideal way, which is some amount of top-down strategy of, “This is the world we want to build. This is where we want to see our audiences.” Sometimes brands aren’t set up to allow that kind of long-term thinking, but it’s working within those constraints to start building something that’s more cohesive over time.

Speaking of building, Les, how are you going to be building this game? What are you going to be doing with AI? How is it different from maybe how you would approach this type of problem set in the past or the types of companies that you would have invested in a few years ago?

We have already been doing it for years so I love everything you have to say. Fundamentally, I couldn’t be more opposed to it in some ways. I don’t think that way anymore and I used to. I don’t give a damn about brands. I don’t care about any of it. No offense. That’s just me as I get older.

A very polite argument we are having up here. Contrast is why we are having conversations. Our audience should understand the different perspectives here because where this is all going is still a story being told.

I typically build things when I’m angry. I figure out ways to deal with that anger. Being a licensing guy, I have done licensing for companies like Zynga, GIPHY, and all these companies. The first thing I wanted to do when I looked at this Web 2.5 model was to find content that I loved. There were games that impacted me when I was young, things like Neuromancer and all those cyberpunk dystopian games. I thought, “I want to go license a bunch of source code.” That was the first part.

The second part was I wanted to take social media influencers who were authentic, not brand influencers. I didn’t want someone like Kim Kardashian to promote a game when she didn’t know a thing about games. I wanted to take authentic gamers who stream and build them into games, almost creating a true digital twin model: putting them into the games we were building, but also stepping outside our foundation and putting them into other games as those games were being developed, because interoperability doesn’t exist yet.

It’s incredibly hard to do it across chains. You can’t. You can’t have people build into the games. The other component was, in this world, to put AI into these characters. That solved a couple of things. If you are a guy who went through a divorce and spent a lot of time looking at technology at night because you have nothing else going on, you can learn a lot about the apps that are out there. I looked at all of it and I thought, “Wouldn’t it be an incredible environment if gamers could have deeper interactions with the characters they are playing with?” We started building in VR and on the web. It goes from there.

I can’t help but think about a formative moment in the journey of Edge of NFT, when Yat Siu, the Chairman of Animoca Brands, came on our show. We were about six months in and we were like, “We have these contests we do. Do you want to do a contest?” Normally, someone will give us an NFT or something like, “We will give it away.”

He’s like, “No. Why don’t we create an Edge of NFT racing car in our new racing game and bling out 1,000 of these with your logo? You can give away your race cars to your audience.” It went viral. It started our relationship with Animoca Brands, who’s one of our lead investors, and it made me rethink co-creation. I have to say it was cool to have an Edge of NFT race car in a game six months into creating a new company. It was one of those moments that catapulted our brand.

Expand on that further. If you take Animoca, we start to look at the influence of a culture that’s global. These are all lessons from cryptocurrency. When we look at the secondary marketplaces that existed for NFTs, they can distribute anything. If you start to integrate into games from companies like Animoca, Mythical, or YGG in the Philippines, you have a bigger distribution system by building those products. If you can make them more robust by putting AI into them and creating more interpersonal relationships, then you have built more than just a fundamental world. You have connected all the pieces. That’s the same experience.

There was some news that the writers were finally talking to the studios and trying to figure things out. I don’t know where all this is going, but let’s play future tellers for a moment. Rachel, how do you think the writers’ and actors’ strikes are going to fall out? Do you see a new industry category being created? I mentioned likeness and essence. Is that essential to the new entertainment industry coexisting with AI?

A lot of this comes down to the specificity of what we are talking about. Sometimes that nuance gets lost in the conversation around AI on all sides. Part of it is related to some of what we have been talking about. What is the format of what we are consuming? A lot of the conversation is centering around film and television as being the format output of choice.

When we bring AI into the equation, the format isn’t necessarily something as linear and passive as film and television. It’s something more responsive. It’s content formats like games and virtual worlds. The racing car is embedded with narrative affordances, a part of it that says, “I specifically have this racing car. I got it from my involvement with Edge of NFT, so it means certain things. It gives me a certain boost when I’m within the game.”

My car was pretty slow, unfortunately. It had a bad turning radius.

It’s the future Edge of NFT or Edge of AI vehicle. All of those things aren’t necessarily formats as we know them. They are an emerging category that enables participation for the consumer within the story worlds that they care about. I’m going to say as an aside, story worlds are brands. Everything is a brand. IP is a brand. It depends on how you categorize a brand in terms of how you want to engage with it.

The brand is something that brings people together. It’s something people identify with; sometimes that’s shoe companies and sometimes it’s Star Wars. As a part of that, there’s this evolving picture of where creators fit into that story. When we are talking about the Writers Guild and SAG, it’s a question of, “How are we making sure that writers write themselves into the story of these emerging formats? How are we making sure that writing isn’t just considered the words that you put on a page in a screenplay?”


If you are creating the backstory of an AI character, that’s writing as well. If you are thinking through the logic of the systems of a world and how they relate to each other, that’s being creative, or being a writer, in some way as well. It’s leaning into the nuance of what these new roles emerging around writing look like, for instance.

In the context of SAG, for instance, some of it is leaning into the nuance of what capture looks like. If you are an actor and you act in a film, you don’t own the film at the end of the day, but you still own your likeness. You can go and sell your likeness to take on other roles as well. Some of what we are seeing with the SAG conversation is this conflation of performance capture and body capture, saying that those are the same thing when they are not.

As we move towards making sure that there’s equity in how creators are part of the process of collaborating with AI, it’s making sure that, at the end of the day, name, image, and likeness are owned by the people who are inputting into the system, and that they have the ability to own and get value out of the work they create by collaborating with AI.

I hope SAG has someone like yourself on their advisory committee because these are some important points. Les, what are your thoughts here?

We have been doing motion capture for some of the influencers. It’s super costly to do it right. I don’t want to mess with anyone’s likeness. If they leave, they can have it. I don’t care what it costs. I want to participate in the monetization that I create. When we take a look at the strike, it’s about a creator’s ability to earn a living at the end of the day.

When did the studios and the distribution mechanisms become so important that we can’t exist without them? That’s where the disruption needs to happen. Does it need to start with what the economics look like? We have been on this plan for a long time with studios and record companies. It needs to change, and it will be easier because then anyone can create. The natural selection process to even break in, what you have to do to get it done, is insane. You should be able to have a platform that allows you to create and show the world what you are creating and get paid for it.


On the flip side, where do we run into a brick wall in terms of creativity and originality, where AI pushes the limits and at the end of it all is potential genericness? Is that a challenge here? Whether or not we are at the point where creativity is trumped by the power of AI is debatable. I have talked to actors, writers, and artists who think AI is already winning. Do we have a concern with creativity and originality? You are shaking your head, Rachel.

I don’t think so. First of all, if you play with AI tools, there are a lot of issues with where they are at. There are also certain caps to what they can do as they exist. If you play with Stable Diffusion and image generation models, a lot of those are transformer-based models, so you are not going to be able to get the stylistic or object consistency that you want.

There’s going to be a limit to what you can achieve with that. People assume that everything is about computing power, that if we keep throwing compute at it, it’s going to get better and better. That’s not exactly how it works. First of all, there’s a limit to the tools as they are structured. Secondly, there’s still going to be value in what we bring to the puzzle in terms of collaboration with AI.

Humans and AI are good at different things. If we think about where intelligence comes from, and I’m coming at this a little bit from a neuroscience perspective, it’s not localized to our brain. It’s not just about the connections within the networks of our brains. It’s our embodied intelligence. It’s the fact that we have sensory systems that feed into how our brain works.

All of those things, like the schemas that are a part of our brain and the society in which we grow up, all feed into what we consider human intelligence. There’s no way to replicate that structure one-to-one with AI. They are going to be different at the end of the day, which isn’t a bad thing. There are things that AI can do much better, and will be able to do much better, than we can. It’s about designing complementary ways for AI to augment us.

I hear you acknowledging that; you agree with each other on this one.

I’m looking forward to superintelligence taking it past our capacity and seeing what it creates, or where we are from that. Do we get to AGI? You could probably answer whether that gets us to superintelligence. The word you used several times is accurate. It’s a tool that can enhance the creativity of an individual, depending upon the prompts that I put in and what philosophies I believe in, scraping what I have said on social media to create a true representation of myself if we are using it for a digital twin model. It’s no different in some ways if I’m co-creating and using it as a tool for what I’m going to be creating.

Let’s flip the script and talk about the consumer consumption side of the house. In media consumption, there are new patterns shaping what people watch and how they consume. Where do you see this side of it heading?

AI has been incorporated into that for a long time. If you look at a company like Netflix, Spotify, or TikTok, there are algorithms that are feeding you content. They know you and your preferences well. Where we are going to see more of an evolution is around better understanding of your spatial context and emotional context.

Say you got an email in your inbox that caused your blood pressure to spike, and Spotify recommends calming music. What does that look like? It’s that interplay of understanding where we are at as humans on a deeper level, but that’s a lot of information and data to know about a person. How are we making sure that there’s a privacy piece to that? How are we making sure that there’s ownership for the individual, like, “I do want Spotify to know this because I want it to feed me relevant information, but where else does that information go? How am I aware of the ownership of my data?”

I always have a little bit of FOMO like, “What is Netflix or Spotify not telling me what I want to know to expand my horizons?”

It’s interesting because sometimes, depending on your settings, you can go in and see the profile they have created of you, and you get a better sense of the data picture this technology has of you. That’s always interesting because it picks up different data points. It’s not going to be fully accurate.

I’d like to think that I’m always evolving as a human and my taste is changing. I want to get outside my comfort zone. Maybe I don’t.

I’m curious about something you could probably answer. What we don’t hear too much about, now that we have made the shift to Microsoft and ChatGPT, is how it’s going to impact search and lead generation, and the way people have been paid. I’m curious about your opinions on some of that.

It comes back to formats in some way. It used to be that you search and then you curate through links. Now you search and there’s a synthesis of content. We are going to see more specialization of ChatGPT. Even though it tries to be universal, it still has a POV at the end of the day based on the constraints that are built into its systems. You might opt into different types of constraints and thus choose different types of engines. That’s one way we will see that market develop.

The other is, if you are using AI to create an email and make sure you have enough content, you put in the bullet points and it spits out an email for you. On the other end, someone is reading that email and is like, “I don’t want to read all this,” and is using AI to synthesize it. What does that mean? How does that change content and the functional relationships between people, especially when we are not talking about narrative but about efficiency? There might be shorthand ways of going engine to engine.

I appreciate the question too and it makes me want to ask you, Les, a little bit more about how you think AI is going to shape marketing distribution for your gaming company.

Before I answer that, I was thinking about something else that I love, and you can elaborate on this. With the way large language models work, from what I understand, you have this destination, which is Microsoft behind OpenAI. What I love about everything that’s hitting the market is I don’t believe record companies are going to be able to stop it. It’s going to land on Microsoft to figure out what a takedown is and what it isn’t.

At a certain point, there’s going to be so much content. If we look at the bot problem happening on Instagram, Twitter, and all these places, imagine what’s going to happen with AI when that speed of content creation hits everything. How do you take it down? How do you manage IP at that point? That’s going to be an interesting dynamic.

I don’t think I know the answers to that yet. That’s a huge space that still needs work. We are talking about spaces where there are still a lot of gaps in content regulation, because context is so important for content regulation, and that’s something AI struggles with. There’s also the privacy piece that we talked about, regulation around AI safety, and AI attribution. Those are all areas where we are trying to catch up but are quite far behind in figuring out solutions.


It’s wrong. I’m rooting for AI.

What does it mean to root against AI?

Rooting against AI would be rooting for Microsoft.

This has been an exciting conversation. I have learned a lot. I want to give our audience a chance to ask 1 or 2 questions. If anyone has any, for the sake of expediency, jump right to your question.

You guys talk about creativity, AI, and storytelling. Is there anybody out there that you can co-sign, like, “These guys are doing it right”? Artists who are using AI in interesting ways that are standing out, the way people did with NFTs or big actors doing film. Do you see anybody standing out that we can tune into and see as a blueprint for what people are doing?

The question is are there any artists who are being true pioneers at using AI in the right ways?

I love searching for music that’s been created that has nothing to do with the original artist. For instance, there’s an Oasis record; it was called AISIS or something. They took the timeframe when the band was starting and wrote the next record. It was interesting to listen to. Or there were these variations of The Beatles and Brian Wilson performing a Beach Boys song. That stuff is interesting. It’s stuff we would have never heard before, even if it isn’t exactly right yet. There’s a lot happening in music.

There are a lot of creators. What’s most exciting to me is the democratization of tools. Many people, every time there’s a new tool out, are on it. They are not just playing within the constraints of the tool. They are linking these tools to each other, and that’s what’s exciting to see.

It seems to me that AI is the elephant in the room of these two strikes crippling Hollywood at this point. Where do you see the equilibrium point of having AI enable and augment creators without taking away jobs from these two unions?

I’m going to try to paraphrase the question. There’s this clear tension between using AI to transform and innovate the entertainment industry, the potential loss of jobs, and disincentivizing talent and creativity. Where is all this going to go?

From where I sit, you are not going to like the answer. Creatives should be creative and there should be outlets for creatives. If the existing structure doesn’t work, then screw it. Figure out another place to be creative. Look at this younger generation; as we dealt with things in digital currencies, they were incredibly innovative with lots of things.


Think of the GameStop short, where these huge hedge funds got crippled by kids who believed in something. I want creators and writers to look at the next generation of kids that are building. If we go back to these archaic models, sound stages are going to become museums. Technology, especially digital technology, gives you so much you can do that you have never been able to do.

You look at the studio model, for instance. You have an affiliate model for distribution where you are dealing with someone in a foreign market, and you have to motivate them to sell your product. It’s like, “Wouldn’t it be easier if it was all connected?” That’s what this younger generation is going to do. They are going to create different opportunities, with higher margins that go to the creators, and the world will be back in the place it should be. I don’t care about the strike.

Rachel, any closing thoughts here?

I would argue that AI isn’t the elephant in the room of the strikes. The issue is fundamentally about the economics of content. What is the content, and what are people willing to pay for, both to create and to consume? That’s the question. AI is the flashpoint on which that’s turning. If we want to solve it, it comes back to a lot of what you are saying: we need to innovate on the formats and models behind what people are consuming. It’s been changing for so long and we haven’t kept up with that.

One last point there. If you take a look at Silicon Valley or venture-backed startups that are innovative, their skin is in the game. Everyone who’s developing those companies wants to win. When we take a look at the entertainment business, none of them are incentivized. They turn up to their job and do their job. The innovation gets smacked out of the room, and that’s a problem. It’s more of the same. When we start to incentivize the people who build these companies, they will figure out how to make money while paying creators. It’s been wrong for so long.

This has been an incredible conversation. I can tell because it’s getting late here in Venice and the audience is still glued to their seats, engaged. I learned a lot. I appreciate your time so much; thank you for being part of this launch party for the show. I would love to let people know, here and at home, where they can go to learn more about you and the projects you are working on. Rachel?

You can find me at RachelJoyVictor.com. Also check out FBRC.ai. We are working a lot in the space of connecting startups who are solving specific issues and gaps in the AI ecosystem to each other, and to resources and corporate sponsors that can help support them and bring them to the next stage.

You can reach me at Les@WaveGP.com or on social media. I usually answer.

It’s time for another safe landing at the outer edge of the AI universe. On behalf of our panelists and the entire crew, I’d like to thank you for choosing this voyage with us. We wish you a safe and enjoyable continuation of your journey. When you come back aboard, make sure to bring a friend. Our starship is always ready for more adventures.

Head over to Spotify or iTunes. Rate us and share your thoughts. Your support and feedback mean the world to us. Don’t forget to visit EdgeofAI.co to learn more. Connect with us on major social platforms by searching for @Edgeof_AI. Join the exciting conversations happening online. Before we sign off, mark your calendars for our next voyage. We will continue to unravel the mysteries and advancements of AI. Until then. Bye-bye.

 
