
Wavelength – Interview with Ben Wodecki from AI Business about generative AI

Daniel Harrington

21 December, 2022

 

We've got a very special guest for you this week to discuss all things ChatGPT and generative AI. Ben Wodecki, Assistant Editor for AI Business, speaks with Daniel Harrington to discuss:

  • What's made ChatGPT an overnight celebrity.
  • Concerns about bias and misinformation.
  • Incoming copyright and IP infringement lawsuits.
  • Potential use-cases and the competitive landscape with Meta, OpenAI, and DeepMind.
  • What's coming next in the field of Generative AI.

Listen to this episode of the podcast here.

Find our last episode here.

Please enjoy and stay tuned for the next edition of Wavelength! If you're interested in any of the topics or articles that Ben brought up in this episode, please find the links and a full audio transcript below.

 

Sources:

Thought leaders Emily M. Bender and Grady Booch.

The AI Business Podcast

Francis Gurry, former Director General of the World Intellectual Property Organization (WIPO)

Westminster Forum on Next Steps for AI in the UK

 

Audio Transcript:

 

Daniel Harrington  0:00  

We've got a very special guest today. Ben Wodecki from AI Business is here to talk with me about ChatGPT and other similar generative AI models. Thanks for joining me today, Ben.

 

Ben Wodecki

No, Dan, a pleasure. Thank you for having me.

 

Daniel Harrington

No worries. So the first thing I wanted to get into today is the way AI ties into business, and of course the media. I wanted to talk about, from your perspective, how you see the differences between ChatGPT and other generative AI models, or whether you even see significant differences at all?

 

Ben Wodecki

Yeah, thanks, Dan. And expertise is a loose word, of course. The reason ChatGPT is hot right now, and we'll come on to that in a little bit, is because it's free, it's new, and everyone gets excited by it. Generative AI is nothing new. I always like to point out that it's not just text, it's not just images: look at Nvidia's GauGAN. Everyone was using that this time last year, and I can barely remember it now. So the difference with ChatGPT is purely that it's free at the moment, and that's probably going to change in the future.

I think there's probably going to be a subscription model, but that's another topic. But it's simple to use, and that's why it's really taken off. So, the underlying nitty-gritty of it: ChatGPT is built off InstructGPT, which was published back in January. InstructGPT is designed to provide detailed responses to instructions. That's why, when you put something into ChatGPT, it's going to give you a little bit more detail than, say, BlenderBot 3 might. And then, to further improve ChatGPT, it was fine-tuned from GPT-3.5, which is a revised version of OpenAI's flagship model that shares the name, but is a little bit better at generating detailed text.

That fine-tuning is why it provides much better responses than it initially would have done back in January. And that refinement works well, but not all the time. We'll come on to why this isn't a golden goose in terms of chatbots. I went into the finer detail when I was writing about it for AI Business. OpenAI touted a reinforcement learning technique that, to quote, 'provides substantial reductions in harmful and untruthful outputs'.

Now, I found that quite humorous, because GPT-3 famously had lots of problems, as one of the earliest large language models: it was riddled with bias. As an example, myself and a former colleague of mine, Max Smolaks, went to a play that was supposedly written by GPT-3. And there was a gentleman, one of the actors, who was from an Asian ethnic minority, and in some instances it gave him the role of a terrorist. It was very hard, very aggressive imagery. So yeah, I found it very interesting that they've decided they need to fine-tune that.

As an aside, GPT-4 is looming. It's coming very soon. So I'm not surprised that ChatGPT has this reinforcement learning option to sniff that out. But as I said, that's how it's built; we'll get into more of the why it's gone viral and the things behind it.

 

Daniel Harrington

Yeah, thank you for that. It's really interesting to get your perspective on the longer-term history, and how these models can tend to come and go as a bit of a media storm gets raised. From our perspective, we saw a lot of the same thing around DALL-E 2, when everyone was creating this crazy art. Maybe in the opposite way from ChatGPT: the fact that no one could get on the waitlist for a really long time might have built anticipation.

But yeah, I think you're completely right to go into the bias and accuracy, and maybe misinformation, concerns with generative models like this. Because a big part of reinforcement learning from human feedback (RLHF) and human-in-the-loop type systems is that, in order to tell these models what a good response is, or to assign a complex human value to an objective dataset, would you say we implicitly reinforce our own biases through that system?

 

Ben Wodecki

I would say so. I think what you picked up on was the misinformation point, and that's poignant in terms of what we're discussing. Just a couple of weeks before ChatGPT came out, we had Galactica from Meta. It lasted a weekend before it was taken down, because it was spouting out nonsense and falsities around scientific papers. And I was giving a speech at the Westminster Forum policy event online about the next steps for AI in the UK, and I described it as potentially the biggest tool for spreading misinformation in our generation. And when I say misinformation, I'm not saying 'Trump won', as some bots have done. I'm saying this purely on the basis of it spouting out falsities in terms of science and things like that. That's the problem that needs to be addressed, because of the underlying issues it's built on. So, for example, we did something where we used ChatGPT to build a holiday advert, and you can read about that on AI Business from the 23rd of December. We asked it for holiday-only content, but it only gave us Christmas-related things.

And the advert that it generated was about a little girl wanting a dolly, which reaffirms stereotypes of young girls. So it has that underlying issue, which still won't be ironed out for a very long time. And the interesting thing, going back to the Westminster event, is that people are starting to take note. At that event, a gentleman from the ICO, the UK data watchdog, admitted that they're watching this space and actively monitoring for any potential issues. And they've been very, very active in terms of AI; you can see that from their very effective enforcement against Clearview AI, although that's being disputed by the company, of course.

So I think that is a big problem. And the other main issue is the legality. The big thing about the legal side, and it's come from the images and it's going to spill over into the text as well, is copyright. This has been an issue as far back as 2019, so it predates ChatGPT, predates every other language model we've been talking about so far. I used to cover intellectual property for about three or four years, and I remember at a WIPO event in 2019 on AI and IP, Francis Gurry, the former director general of the world IP office (look him up, he's got quite an interesting Wikipedia page), said that AI was more akin to authorship than inventorship. And there's a big dispute right now on the patent side of things. So I think this leans into it, in terms of: okay, it's reproduced something, but if it's taken something like a licensed image online without consent and then generated something new, A) is that infringement, and B) is that worthy of its own copyright, because it's created a new work?

 

Daniel Harrington

I was gonna say, from my angle, having used ChatGPT a fair bit, the thing I was trying to nail it down on was it saying 'I'm unable to create original content'. For me, like you said, it's the copyright and authorship debate that interests me: if neural networks and deep learning are designed to simulate the human brain, how is that different from a human reading a copyrighted work, then putting a couple of things together in their own mind and recreating a combination?

You know, it's that quote: a good author borrows, a great author steals. And, you know, is it copyright infringement because there's not enough of a digestion process, if you will? Is the link too direct?

 

Ben Wodecki

Yeah, there are so many IP lawyers I work with who'll be so excited that I'm talking about this right now. No, please don't apologise. I love IP law, always have and always will. Because of this idea of fair use: how would you extend that to AI? We're going to be talking about this for five years, because the pace at which law, regulation and governance moves is so slow that we could have ChatGPT 3 by the time we get around to it.

What you've got to remember is that this year alone we've had two Stable Diffusions; Stable Diffusion 2 came out very recently, at the tail end of '22. You know, this is moving so fast that even us journalists can't keep up.

Daniel Harrington

That's saying something.

 

Ben Wodecki

Yeah, exactly.

 

Daniel Harrington

We moved on from misinformation into IP and authorship. I also wanted to talk about a couple of the associated media cycles that have been surrounding generative AI, and that's the news around Stack Overflow banning responses from ChatGPT, for instance. I pulled out a couple of quotes where they said, you know, 'the average rate of getting correct answers is too low' and 'the posting of answers created by ChatGPT is substantially harmful', which echoes the harmful outputs you mentioned earlier. It just shows that maybe the danger isn't always 'is this model correct or not?' It's 'is it correct or not, and how many people can create answers, and in what volume?'

 

Ben Wodecki

Yes, Stack Overflow were very fast to move on that, kudos to them. And it boils down to this idea of coding. The coding aspect is really interesting, because although you can ask ChatGPT for a response from an instruction, it's also designed, quote unquote, to give you tips on coding. And that's very interesting, because then you get into the idea of Codex and Copilot, which is ongoing at the moment. GitHub, along with OpenAI, who built ChatGPT and helped to build Copilot, just launched Copilot for Business on a subscription model.

And yet it's currently hit with a lawsuit over claims that it, again, reproduces code without permission from the owners. This is copyrighted code. And I spoke to one of the lawyers behind the lawsuit. When they pushed Copilot for Business, they said, 'Oh, you can design it in a way so that people who use the tool can't see things from public repositories'. And, you know, he turned around to me and said, 'well, it actually doesn't include any significant technical engineering improvements'.

So this is something that's already been hit with a legal issue on the coding side of things. Stack Overflow was right to move quickly, and I think Getty Images, on the images side, were very fast to say 'hold on a minute'. But then Adobe and Shutterstock both zoomed forward to try and secure generative AI. So seeing some businesses hold fire, and some push for this, is quite interesting.

My favourite personally, on the art side of things, is DeviantArt, the internet's largest online art community. They banned AI art, and I think you guys talked about that kind of banning in a previous episode; the fact that DeviantArt did it is quite a feat. But yeah, I think it's not the technology that's exciting, it's so dull. It's the law that is actually the most exciting thing, because of what it generates.

 

Daniel Harrington

Yeah, I guess you're right. That is where the drama comes from. Because obviously, within a set population, there's only a certain number of people who understand the technicalities of the parameters and the tokens and the labels, and, you know, few-shot and zero-shot learning and all of that kind of stuff. But the average person can understand the concept of 'we've fed copyrighted information into this machine, and it's coming out with a slightly different version; is that still copyright?'

It's like you said, ChatGPT itself is accessible, but so is the discussion around it.

 

Ben Wodecki

And that's what's made it so viral. That's what's really exciting. Because it's free, and it's easy to use: a simple API plug-in, or even just typing into the actual website; you just need access to the OpenAI API. Because it's so easily accessible, people can use it themselves, and it generates buzz, it generates hype, but they don't understand the underlying aspects of it.

So what I thought I would do, as I did when I was learning about this from a journalist's perspective, is point to some really thoughtful thought leaders that I think your listeners should go and read up on to see how this actually affects people. I'm a big fan of the following two. The first is Emily M. Bender, professor at the University of Washington. She's on Twitter; she's wonderful. What Professor Bender does in her threads is really boil down what it actually does. She described it as: ChatGPT is designed to create confident-sounding text. But the question is, if it sounds confident, will the layperson be able to tell whether it's any good, just because it sounds confident? I think that's fair.

The other person is Grady Booch. Grady (great name) is one of the trio that developed the Unified Modeling Language. Grady is like the most sceptical guy I know in terms of AI and tech. Myself and people from the old AI Business podcast call ourselves the real tech sceptics, and Grady is incredibly sceptical about this. He described it as, and I'm reading this as a quote, 'remarkably coherent prose full of half-truths intermingled with boldly stated falsehoods masquerading as truths. To the outsider, astonishing; to an expert, this is nothing more than well-formed statistical nonsense at scale.'

He's got a way with words, exactly. And I think it shows that this technology has the ability to wow people, but in actuality, the underlying issue is that it's just a chatbot.

 

Daniel Harrington

And I suppose that is the thing. It was released as a chatbot, and OpenAI accompanied that with, you know, 'you can do a few things'. At least from my perspective, we saw a lot of the buzz coming from people on Twitter posting: oh, I got it to write poetry; oh, I got it to write some Python scripts.

And especially when it's use cases that haven't been officially endorsed, where people are sort of finding them by themselves, as long as it sounds confident, as long as it has that persuasiveness, it can seem like an absolutely game-changing technology. Because you might not be an expert in what it's producing.

 

Ben Wodecki

That's right. We did the same thing at AI Business: like I said, we made a Christmas ad using it, and in actuality it was better than the rest, but it wasn't exactly good. We asked it to do it in the style of John Lewis, and it was the most generic thing you could ever have written. But it was better than the others almost by default. I sound horribly sceptical, and I apologise to your listeners who want to listen to...

You know, among us journalists, we come at this from a perspective of negativity, because we're bombarded with 'this is going to change tech', 'this is going to change that'. But in actuality, we have to sit there and think about this logically. There are some things I'm excited about with generative AI, just for the record. For example, it enabled me to make a D&D image for my character, so I didn't have to use MS Paint. That was fine.

But from a tangible business use case, what is the business angle there? You can't make money off of, you know, a subscription for typing in a prompt and, here you go, an image. You have to think of the tangible business use cases. Copilot makes sense, but it's done in such a way that it's already been hit with a lawsuit and needs rethinking. I think in the long term there's a lot going on here to be excited about, but at the same time, taking that step back and realising how it's built, how it's being published and how it's being accessed is the important thing here.

 

Daniel Harrington

Yeah, definitely. And you brought up a really good point that I wanted to dive into a little bit: the long-term business implications, the use cases, and how it stands in that business landscape. Obviously, you're very well positioned to talk about this at AI Business.

We're talking about OpenAI as, obviously, founded as a nonprofit, but, you would say, now primarily a Microsoft entity, with a big investment from their side. As opposed to the, to my knowledge, wholly Google-owned DeepMind. And you see OpenAI are the ones coming out with the biggest, or most well-known, models: DALL-E 2, Whisper, ChatGPT.

But obviously, we have to imagine that DeepMind have similarly advanced models, or are at least in the weeds of the technical details as much as them. So do you think there's the potential that OpenAI are releasing these things that maybe aren't totally finished, or sort of putting them out at the research stage, in order to gather information from consumers and ask, 'Okay, what are people using this for most? Content at work? Are people using this to answer questions, to do coding?' How would you see ChatGPT fitting into this competitive landscape?

 

Ben Wodecki

Yeah, that's a good question. I always look at it as the big three in AI, in my personal opinion: that's Meta, OpenAI and DeepMind. Those are the big players in this AI space.

And in terms of OpenAI, you're right, they've put this out there, and they've said, for now, it's going to be free. Anyone can access it, because they want responses; they want to see what it can do out with the public. What a great way of getting actual responses to your research, right? Imagine doing that. I did an MSc recently, and I couldn't get responses for my survey. It's the single greatest crowdsourced way of getting answers to your research. I think that's genius.

And obviously others did it as well. Meta did it with BlenderBot. What did they find out? It was a complete and utter failure. But they took that away, learned from it, and are working on it behind the scenes. I think it's a sensible way of doing it. And it's got a lot of people excited, both from a business perspective and from the general public as well. Because then, like with DALL-E 2, they can turn around and go, 'Oh, look, I made this'. You know, anyone's mum can go and do it. It's that simple; it just takes two minutes to generate.

And that's simply it. From a business perspective, I had a chance to speak to a lot of practitioners at our recent New York event, as well as in several interviews recently, and everyone is obsessed with this area right now; everyone is excited by it. But when I ask them, 'Okay, tell me some tangible use cases', there's normally a pause, normally a break. And I think that's reflective of: it can do a lot, but of what? There's a great Omdia analyst, Mark Beccue; we're doing a podcast with him this week, and he dives into that deeply. He's like the most vehement arguer for 'what does this do from a business perspective?' Because do you think OpenAI are going to put this out there just so people can have fun? No, they want to make money at the end of the day. This is a project that will eventually go on and make money, maybe not for OpenAI, maybe for Microsoft, who knows?

It's got to have tangible business use cases. There are some; I think the Copilot coding is a great example, but it doesn't work well currently, based on what they're saying on the legality of it. I think it just takes time. And the way they're putting out their research, in terms of 'okay, public, have fun with it; tell us what it does well, tell us when it doesn't', is the best way they're going to fine-tune this product as a product.

 

Daniel Harrington

Yeah, definitely. It's always great to hear your insight, having spoken to people in the industry and seen where that's coming from in these in-person meetings, especially at the AI Summit in New York. And I think it's really interesting what you're talking about, in terms of people getting very excited and the use cases, for people at least, coming second.

It's always going to be about drilling down into that. But maybe what we're looking at now is the stage that, like you said, Meta were trying to get to with BlenderBot, or the, I forget who developed LaMDA, maybe Google?

 

Ben Wodecki

Yeah, they were.

 

Daniel Harrington

Yeah, maybe the stage they were going to get to before their scientist went rogue and started claiming it was sentient.

 

Ben Wodecki

Yeah, I think I won't talk too much about the sentience case, purely because that poor gentleman has been given enough of a kicking this year. Don't get me wrong, BlenderBot failed for Meta. But there are so many other good things you could talk about with them. They're challenging DeepMind with the protein-folding model they're doing; they're putting in a lot of time and effort. So, from a private standpoint, don't judge them just because one chatbot went wrong. There are a lot of people doing a lot of great things.

And I think not all generative AI is good in terms of what it can do. You've just got to actually do your research and figure out: okay, this model is good for this, this one is good for that. You know, DeepMind are doing some great stuff on board games right now, and so is Meta. It's really about actually reading into what they can do.

And I always say this, and I'll say it to your listeners: just because a model has more parameters than another model doesn't mean it's better at doing certain tasks than that other model. So, you know, when you reached out to me originally, we talked about large language models. Fine, great, but how big it is doesn't necessarily reflect how good it is.

 

Daniel Harrington

Yeah, no, that's a really good point. Like you said, sometimes the bigger shovel doesn't get the hole dug quicker.

 

Ben Wodecki

I should know, I should know. I've dug a lot of holes in my time.


Daniel Harrington

No, yeah. And thank you for coming on today, Ben. It's been so, so insightful to speak to you.

 

Ben Wodecki

It's been a pleasure. Thank you for having me. And apologies to your listeners: I'm a lot more happy and normal in person. It's just a very interesting topic; there's a lot to talk about.

 

Daniel Harrington

I appreciate it. Thank you for your time. 

 

Ben Wodecki

No worries. Thank you.

 

 

Where can I find Wavelength?

Wavelength is available on all major podcast platforms. 

Spotify

Apple

Amazon Music

 

Attribution:

Music used under Creative Commons license: Covert Affair by Kevin MacLeod

Link: https://incompetech.filmmusic.io/song/3558-covert-affair

License: https://filmmusic.io/standard-license