
Beyond the Bot Ep. 7: Generative AI Law & Ethical Implications with Marissa Porto Pt.1

Tony, Steven, and Marissa Porto for Beyond the Bot Episode 7

In this thought-provoking episode of Beyond the Bot, hosts Tony and Steven sit down with Marissa, the Knight Chair for Local News and Sustainability at UNC Chapel Hill, to explore the intersection of artificial intelligence, art, and copyright law. As generative AI tools become increasingly embedded in creative industries—from graphic design and journalism to music and video production—the conversation probes into pressing ethical questions and legal uncertainties.


Marissa brings a rich perspective from her background in local journalism and media sustainability, providing a nuanced take on generative AI law and the implications of AI-generated content. The trio discusses everything from the legitimacy of AI art to the copyrightability of AI-assisted creations, touching on global legal trends and the evolving responsibilities of businesses and creators alike. This episode is a must-listen for anyone navigating the blurred lines between human creativity and machine-generated innovation.


Transcript


Tony DeHart: Hello and welcome to another episode of Beyond the Bot, where we go beyond the headlines and explore the world of AI and robotics and what it means for you and your business. I'm Tony.


Steven King: And I'm Steven.


Tony: And we're here in the Blue Sky Lab and we're joined by Marissa, the Knight Chair for Local News and Sustainability at UNC Chapel Hill. Marissa, thank you so much for joining us.


Marissa Porto: Thank you for having me.


Tony: So before we jump into the topic here, can you tell us a little bit about who you are and what your relationship with the news and artificial intelligence is?


Marissa: Well, I've spent most of my career in newsrooms, covering small communities around the country, leading newsrooms, and then leading news businesses for companies in the United States. I've been here for three years as the Knight Chair in Local News. I focus my time and attention on the intersection between journalism and sustainability innovation, and for the last few years I've been studying AI and how it's changing the business model.


Tony: So AI is a huge topic in the realm of innovation and creating content, right? And, you know, we talk a lot internally about artificial intelligence as a driver of business value. But today we really want to focus on the creative applications of AI and what some of those might look like. So when we talk about AI art, what exactly are we talking about here?


Marissa: So we're talking about—it's a broad spectrum, right? It's everything from poetry to stories to videos to—


Steven: Music.


Marissa: Music. Great. Everything that's creative can be AI art, and that is what we're looking at today and what we're using in our classroom to teach our students.


Tony: So when we look at generative AI, Steven, specifically on the business front, there are a lot of ways that we can use this, right? What are some of the things a business might be looking to accomplish with generative AI?


Steven: I think before I answer that question, I might want to say that there's an argument over whether generative AI art is really art. Is it a creative process? Does it make things? How do we define art, essentially? But let's just assume we're going to call it art, because it makes a visual image, or it makes something that makes us think. And there is business value in that. People can make a t-shirt and sell that t-shirt. So now all of a sudden people are saying, "Oh, I can make things really quickly." They don't have to have all the talent or skill they needed before. They're able to do things because they have an idea, and they can use generative AI to generate that idea, which they can then try to sell and make money with.


Tony: So Marissa, when we focus in on that application—if we are generating an image using artificial intelligence—we've kind of cut a creative person out of that equation in some ways. What are the ethical implications of that?


Marissa: Well, I mean, I think there are a lot of ethical implications of what we're doing with AI, right? They're really twofold. The first is: what is the AI using so that you can go in there, give it a prompt, and have it spit something back at you? Is it copyrighted material? And is that copyrighted material being used with permission or not? If not, the AI companies are undercutting the economic value of that content. So that's the first issue.

Then the second real issue is that, as you're developing something using AI, at what point does it become something more than AI-generated output, something that really has artistic value, that has human interaction in it? What's the point at which it becomes a creative endeavor?


Tony: And so if we go back to our t-shirt example, what really is the point where it becomes a new creation, rather than an image that anybody can just go and print? At what point is it actually copyrightable?


Steven: Well, from my perspective, it's one of those things where maybe even the moment it's generated (now, the courts can argue over this) it was based on a concept I had. I had an idea for a t-shirt, I really did, and it was basically, "Our robots suck." Okay, that was the concept we were going with. I wanted it comic style; I wanted it to have the big "pow" kind of icon to it. I crafted this thing until I got the exact colors I wanted and thought I had it right. Then I used it, I made it, and now someone else has copied the idea. Do I own the copyright on that? I don't know. I like to think that I do. Ultimately, I could have sold the t-shirt. I didn't, right? But if I did sell it, then all of a sudden I'd be losing money on that copy. So I think the moment I created the prompt, I created something that didn't exist before, and therefore I should be able to have the copyright on it. But people like to argue over that.


Marissa: Right. I mean, this is a global issue; it isn't just in the United States. This conversation is happening around the world. And the issue becomes: how much creative work was put into the prompt? The courts are still diving into this, but the Copyright Office at the Library of Congress said in January that if there's significant human creative input into the content, then it's possible it could be copyrighted. So as an example, if I prompt ChatGPT (and you could fill in any number of those tools) and say, "Generate an image of a dog on a skateboard," that prompt is just a prompt. But then suppose I say I want the dog to be a Collie, because I like Collies, and I want it to have five puppies, and I want it to wear a green beret, and I do some back and forth about what that dog needs to look like and what color it is. Now you're going beyond the first prompt; you're starting to use the tool with human input and expression. And that is where, under the Copyright Office decision in January, they decided it could be copyrighted. Now, who makes the decision, and at what point? That's the question right now.


Tony: Well, Marissa, I want to home back in on one thing you said a moment ago, which is that this question is twofold: not just can the output be copyrighted, but was it influenced by inputs that might themselves be copyrighted? So if we go back to our t-shirt example: if I say I want a picture of a dog on a skateboard in, say, Studio Ghibli style or in the style of Salvador Dalí, does that change the copyright implications? And does it change the ethical implications of using that art?


Marissa: Yes. So The New York Times and some other news organizations are now suing Microsoft for this very reason. They're saying Microsoft used content that is copyrighted by The New York Times and allowed their tools to be trained on it, and therefore anything that's output in a New York Times style, or that feels similar to a Times story, was really built on content used without permission. So what you see now is, on one side, organizations like The New York Times suing over that. And on the other side, some news and media organizations are actually finding a way to license their content, whatever their content is, and have the AI company pay them for the use of that content for training purposes. Those are sort of the economic and legal things happening in the world today.


Steven: Because it's really complicated. If I say I want to make this in Salvador Dalí's style, then I had to have looked at a Salvador Dalí painting to do that. Now, if I were an artist copying his style, the courts have said that an artist doesn't own a style. But in the AI case, the model had to study and take in that work, and in most cases that's happening without permission. So it's like you made a derivative of something you probably shouldn't have had access to in the first place. And that's where it really gets complicated as to what this thing is and who has access and rights to it. So if I do it in The New York Times style, does The New York Times now get a few pennies every time I want to make something in that style? I think the courts are going to have to figure that out.


Marissa: Right. And there's a term called fair use. Fair use is a legal term, and it essentially says that if I take some information and transform it in some way, that can be permitted. So let's take an example outside of AI: I read something in The New York Times and think, oh, this is really interesting. I use some of the information, not word for word, in a column, a piece of writing that's either pro or con. That's a transformative use, so that's called fair use of that content. And the AI companies are arguing: letting us train our AI bots on content, that's a fair use. That's really what the courts are going to have to figure out right now.


Tony: Well, and notably in that example, it's attributed, right? In that case, you're saying this is information that I got from a New York Times article. But that's not always the case with AI. So for example, Steven, from a business perspective, as a person who comes up with a lot of creative solutions, how would you feel if an AI chatbot was able to parse those solutions and serve them to people without your consent or knowledge?


Steven: Yeah. I mean, it's like, you know, we come up with a solution, we share it with a client, and the client then takes it and builds it on their own. That's really frustrating. The same thing is happening in AI every day, but it's happening at a collective scale, and you may not be aware of it. And so, as we're trying to figure out the future of this, I think business owners are going to have to decide: how much do I share publicly? Is there going to be some way of me saying, no, this content is not available to AI bots, for example? Is this something that I want to have some way of protecting? We don't have a good way to do that, but I think it'll be up to the people. Maybe the University of North Carolina's Hussman School should come up with that, right? Maybe there has to be some way to protect this content and give people the choice to opt in and opt out. Those types of things.
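
For context, a partial version of the opt-out Steven is describing already exists: many AI crawlers identify themselves with distinct user agents, and a site can ask them not to ingest its content through its robots.txt file. A minimal example, assuming a publisher wants to block three of the better-known training crawlers, might look like this:

    # robots.txt at the site root; each block asks one crawler to stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

Compliance with robots.txt is voluntary, though, so this is a request rather than a protection, which is exactly the gap Steven is pointing at.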


Marissa: And I would say there are businesses that are already building their own AI models. We just had someone from Bloomberg, a UNC grad, speak to my media economics class, and one of the things she said is that Bloomberg has a closed system: it puts only Bloomberg content into its system, because it already knows that content has been vetted. You can't get into the system from outside, but inside the company, you can. So you see those sorts of closed systems developing now.
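
As a rough sketch of the closed-system idea Marissa describes (a hypothetical illustration in Python, not Bloomberg's actual architecture), the key step is a retrieval layer that refuses to pass anything outside a vetted internal corpus to the model:

    from dataclasses import dataclass

    @dataclass
    class Document:
        source: str   # where the text originated
        vetted: bool  # whether it passed internal editorial review
        text: str

    # Hypothetical allowlist of internal sources; names are illustrative.
    INTERNAL_SOURCES = {"newsroom-archive", "research-notes"}

    def retrieve_context(corpus: list[Document], query: str) -> list[Document]:
        """Return only vetted, internal documents that match the query,
        so nothing outside the closed corpus ever reaches the model."""
        return [
            doc for doc in corpus
            if doc.source in INTERNAL_SOURCES
            and doc.vetted
            and query.lower() in doc.text.lower()  # toy keyword match
        ]

Whatever this function returns is the only material handed to the model as context, which is what makes the system "closed" in the sense Marissa describes.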


Tony: Now, Marissa and Steven, there are a lot of things we can do to protect copyrighted materials moving forward. But a lot of media companies have made the argument that the toothpaste is already out of the tube, so to speak: there are already massive libraries of openly available material that have been used to train these models. So is this even a relevant question? Is there a way to go back? Where do we go from here, given that the bell has already been rung?


Marissa: I think that is a challenging question. You have to look at it from the vantage point of the United States, and then you have to look at it from a global perspective. Here in the United States, we have a bit of a Wild West feeling about regulating business, and that's continued: this administration is very much anti-regulation for business, so you see some of those guardrails coming down for different types of businesses. The EU, on the other hand, has very significant guardrails around the use of AI: the ethical uses of AI, how it will be rolled out, and when it's rolled out. The EU has built all of those things into its laws, and businesses here could be affected by that. That's a whole conversation I was having a few weeks ago with some folks from a German university who were visiting here: how do you change the law?


Marissa: Can you put the genie back in the bottle? And what would cause the states to actually consider different legislation? It seemed like the conversation was that if businesses had to operate in another country where there's different legislation, that might prompt the United States to think about what that legislation should be here.


Steven: Yeah, this is really a business thing, right? As a human, I can't unsee something that I've seen. But a trained model is a piece of code built on data that has been collected; we can retrain it and no longer use that model. That doesn't make it financially smart, though, so a company is going to fight all it can. It's a whole lot cheaper to pay lawyers to defend this than it is to retrain, remove all that data, and give up the value they're expecting out of it. So if the courts decide it, then yes, technically we can put the toothpaste back in the tube, as you said, because we will just use a different tube. We'll have to do things differently. I think there's a way to do it, but financially it's not in the company's best interest, nor is it in the best interest of innovation. The question is innovation versus your rights.


Marissa: And let's be honest: ethics, right? How is this technology going to be used in an ethical way? Right now, one of the issues we have in AI is that it's being used for deepfakes. And deepfakes are particularly challenging if you are, let's be frank, a woman, because a lot of what's happening in that deepfake area is sexualized content featuring celebrities, particularly women. So what are the ethics of not having AI guidelines? The United States is right at the cusp of thinking about what to do about deepfakes, and I hope we do something useful with those sorts of guidelines. Those ethics are really important to think about, even if the genie is out of the bottle.


Tony: And there are certainly examples like that where there is a clear wrong approach and a clear right approach. But it does seem like, even for businesses and creators and individuals that have the best intentions and want to do things in an ethical and legal way, there is some gray area where the right choice is not always as clear.


Tony: And, Steven, from your perspective as a business owner, that level of uncertainty is famously bad for business, right? So I guess my question to you is twofold. First, from a business perspective, how do you navigate that uncertain environment? And then, from a regulation perspective, what can be done to remove that gray area and provide some clear guidance for folks?


Steven: Yeah, I think we're going to have to see the courts decide. We're going to have to see precedent. Once we have precedent, then we can make policies and move things forward, and businesses will be able to know where they can operate. That's going to take time. So I think what you're going to see is businesses starting, businesses failing, businesses getting acquired, all while technology changes faster than policy. That has always happened: throughout history, technology moves faster than policy. So we have to figure this out, and hopefully we're driven by good ethical standards as we do.
