All Episodes

331. Understanding AI as a Storyteller with Roy Vella

The Daily Helping Podcast | Oct 16, 2023

 

Is AI going to take over the world like in Terminator? Roy Vella knows the answer to that question and a few more advanced questions too. Roy is a digital strategy expert, AI expert, professional speaker, independent non-exec director, advisor, and consultant to enterprises large and small. As a resourceful Stanford JD/MBA and U.S./UK citizen, Roy has proven complex problem-solving skills and global execution, delivering high-growth, digitally driven teams. He runs a management consultancy providing strategic advice and commercial services to digital leaders worldwide, particularly as an expert in the deep tech, fintech, and smart home industries.

Roy tells us what AI is good at right now, and what it is not good at… yet. He uses ChatGPT as an example. ChatGPT is good at figuring out which words tend to appear together, in what order, and in what context. That is because, as a large language model (LLM), it has been trained on massive datasets. That makes ChatGPT good at storytelling. However, ChatGPT is not built for accuracy. So it may tell a great story, but it doesn't know whether that story is accurate. Therein lies the biggest danger, according to Roy.

Roy emphasizes that in a world with increasingly capable AI, our focus should be on critical thinking. We should be teaching students how to craft the best prompts and questions so they can draw the best output from AI, because the amount of computing power that is coming our way will blow our collective minds.
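The "which words tend to appear together" idea in the notes above can be illustrated with a toy bigram model. This is only the core intuition, not how ChatGPT actually works: real LLMs use neural networks trained on vastly larger data, and the tiny corpus here is purely made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny stand-in for "massive training data".
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count which word follows which (bigram statistics).
next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" twice; "mat" and "fish" once each
```

A model like this is fluent about word order but has no notion of truth, which is exactly the storytelling-versus-accuracy gap Roy describes.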

 

The Biggest Helping: Today’s Most Important Takeaway

Storytelling came through in our conversation about the risks of AI storytelling, but the flip side is also true. We are storytelling and story-digesting creatures. I tell folks all the time, no matter what your job is, no matter what you're doing, knowing how to tell a good story is crucial to success. For all my advanced education, it's improv training that was probably some of the most valuable stuff I learned.

And the problem is we don't practice it anymore. We used to, you know, the sun went down, the lights went out and we sat around a fire and told stories to each other.

And we don't do that anymore. But the most successful people tell the best stories.

I mean, it's as simple as that, whether they're a politician or a CEO or leader of a church or whatever, right? It is the ability to tell stories and the practice of doing that.

You can't just read about telling stories, you gotta practice it. When I'm talking to entrepreneurs, which I do a lot, I'm saying you got to practice the story. You got to tighten that up. You got to tell me why I should believe that and how it's going to work.

Storytelling is almost the crux of all success, in my opinion.

 

--

 

Thank you for joining us on The Daily Helping with Dr. Shuster. Subscribe to the show on Apple Podcasts, Stitcher, or Google Podcasts to download more food for the brain, knowledge from the experts, and tools to win at life.

 

Resources:

 

Produced by NOVA Media

 

Transcript

Download Transcript Here

 

Roy Vella:
But the most successful people tell the best stories. I mean, it's as simple as that, whether they're a politician, or a CEO, or a leader of a church, or whatever. It is the ability to tell stories and the practice of doing that. You can't just read about telling stories. You got to practice it.

Dr. Richard Shuster:
Hello, and welcome to The Daily Helping with Dr. Richard Shuster, food for the brain, knowledge from the experts, tools to win at life. I'm your host, Dr. Richard. Whoever you are, wherever you're from, and whatever you do, this is the show that is going to help you become the best version of yourself. Each episode you will hear from some of the most amazing, talented, and successful people on the planet who followed their passions and strived to help others. Join our movement to get a million people each day to commit acts of kindness for others. Together, we're going to make the world a better place. Are you ready? Because it's time for your Daily Helping.

Thanks for tuning in to this episode of The Daily Helping Podcast. I'm your host, Dr. Richard. And I am really excited about our guest today because, frankly, he's a genius. His name is Roy Vella, and he is a well-known digital strategy expert, AI expert, professional speaker, independent non-exec director, advisor, and consultant to enterprises, large and small. As a resourceful Stanford JD/MBA and U.S./UK citizen, Roy has proven complex problem solving skills and global execution delivering high growth digitally driven teams.

He runs a management consultancy providing strategic advice and commercial services to digital leaders worldwide, particularly as an expert in deep tech, fintech, and smart home industries. There's so much more I could tell you about Roy, but let's get into it. Roy, welcome to The Daily Helping. It is awesome to have you with us today.

Roy Vella:
Much appreciated. Happy to be here.

Dr. Richard Shuster:
So, I'm excited about this episode. We chatted a little bit about what we're going to talk about today, and we've never done an episode on this show about AI. And AI is everywhere. People are worried that they're going to lose their jobs because of AI, that AI is going to take over the defense grid and shoot missiles at us like in Terminator. So, I'm excited that you're here to tell us the real deal about AI.

But what I want to do before that, and I do this with all of my guests, is I want to hop in the Roy Vella time machine, take us back, and have you tell us what put you on the path you're on today.

Roy Vella:
Sure. Let's see. So, geographically, I'm a Brooklyn kid. I was born and raised in New York. I've spent about a dozen years each in New York, the Bay Area, and London. And now I've been outside Boston for about six years. I've always been entrepreneurial, you know, little entrepreneurial ventures with my brother when we were in high school, that kind of thing. And so, I sort of found my heart in San Francisco. And after grad school, I started a fintech company with some classmates, ended up selling that, and got recruited by another classmate into PayPal in the early days of PayPal. And they're the ones who sent me to London.

And I would say over time, you know, I am just insatiably curious. My wife likes to joke that I'm an inch deep but a mile wide in all of science. I can talk pretty impressively with any scientist for a good 15 minutes before I fall off a cliff and I'm like, "Yeah, that's all I know about your area." But I'm just curious. I love to learn new things. I'm in a continuous state of learning. When new technology or new science is published in any way, I tend to have my finger on that pulse, and I enjoy learning about new content. And that is probably the biggest motivating driver of my career and my life, that I'm just insatiable in that regard.

Dr. Richard Shuster:
Want to know. Got to know. Got to know. I love it. So, you know, AI is interesting because it's not new.

Roy Vella:
No. No. Not new at all.

Dr. Richard Shuster:
It's new to most people, but it's not new. So, let's do AI 101. Let's start about where it came from, where it is today, and then we'll keep going.

Roy Vella:
Yeah. Well, I mean I'll tell you, it's not new but it feels new because we are on an exponential curve. So, exponential curves, for folks who don't know what that is, think of a hockey stick. So, it's pretty flat for most of the curve. And then, when you enter the elbow, it starts to go vertical very quickly. And you can look at AI or even technology adoption, writ large, the printing press, 1400s was when the first printing press became mass produced - not even mass produced, became available.

And we have hundreds of years of innovation and development that has gone vertical in really the last 10 to 20 years. And AI started in the '40s. I mean, really, when we started to be able to compute at scale and digitization began to occur, we started studying AI. And we have entered the elbow, which is why for a lot of people it feels like it came out of nowhere, because a flat curve isn't very exciting when you're on it. There's incremental development, and there has been incremental development, since the '40s. But, now, we've entered an age of scale processing.

And I'll tell you, part of the reason for the surprise is that we humans are very bad at scale. We're not good at it. We're good pattern-recognition machines. We're not good at comprehending and appreciating scale. And I can actually prove that very quickly. So, a million, a billion, and a trillion seconds. A million seconds is about 12 days; a billion seconds, 32 years; a trillion seconds is 32,000 years. And most people I say that to go, "Wait. What? Huh? Wait. Say that again. Twelve days? Six times written history?" And, you know, yeah, each step is times a thousand. So, a computer goes, "Why are you confused?" But for us, it's like, "Wait a minute. That's unanticipated."

And that's where we are. We have three different elements, the computing power, the data set size, and the algorithms that are processing those data sets. And all of those are all on exponential curves, like million, billion, trillion. And all are colliding to enable processing at a scale that we just have never seen before.
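[Editor's note: Roy's million/billion/trillion-seconds comparison checks out with simple arithmetic.]

```python
# Back-of-the-envelope check of the seconds comparison.
SECONDS_PER_DAY = 60 * 60 * 24               # 86,400
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # ~31.6 million

print(1e6 / SECONDS_PER_DAY)    # ~11.6 days
print(1e9 / SECONDS_PER_YEAR)   # ~31.7 years
print(1e12 / SECONDS_PER_YEAR)  # ~31,700 years
```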

Dr. Richard Shuster:
I think you just answered the question I was about to ask. My question was going to be: why the exponential curve now? And it sounds like the processing that these computers can do is unlike anything we've ever had.

Roy Vella:
Yeah. I mean, some of your listeners have probably heard of Moore's Law, a doubling every 18 months. But it's kind of Moore's Law writ large. It's in everything: in the size of the data set itself, in the computing power, in the battery power, in everything. All of these technologies are going together. And it's that collision that enables us to process at a scale that doesn't make sense to the human mind. We have real trouble thinking in trillions or even billions.

And there's tons of fun examples about how bad we are at scale. So, for instance, we're closer in time to Cleopatra than she was to King Tut. People go, "Wait. No. Ancient Egypt, that's a thing." Yeah, it is. But it's a long period of time. It's also why people can't really comprehend evolution. We're talking about millions of years and it's really hard to get our head around it.

So, AI is a surprise to people who haven't been in AI or aware of the studies that started, literally, in the '40s and '50s. And what's happened now is we can process very large data sets. So, OpenAI, the folks who put out ChatGPT, their data set is in the billions and trillions, and that feels miraculous. Which is why I'll tell you, when people interact with any of the LLMs, or Large Language Models, the operative word is large, like really, really large. It is choosing the next word to put forward based on every other word. And I don't mean just the words in your current interaction. I mean every other word. It has learned off a data set of trillions of data points, and that's only growing.

So, it feels like magic to us the way it interacts, and the way it writes, and the way it gives us good English, English that's grammatically correct. But, really, it's just doing that at scale. And it's bad at things that it's not designed to do. It's not particularly good at spelling. It's not trained on letters. It's trained on words. It's not particularly good at math. It was not trained in mathematical language. It was trained in English mostly.

And by the way, all of these things that I'm bringing up now are going away. There are LLMs in thousands of languages that translate now seamlessly between languages. Mathematics is being addressed. Coding, I mean the coding that LLMs can do is staggering.

Yeah. So, I would say that's why people are surprised and why it's not going away anytime soon. We're on an exponential curve and we're nearing the vertical and it's only going to accelerate.

Dr. Richard Shuster:
It's interesting, I recall the first time I used ChatGPT. There was a child-like wonderment I had when I saw text exploding onto the screen that was so good. And as the text was populating, I just said, "Oh, crap." There's a downside to this technology, and there are ethical concerns about it. Skynet immediately popped into my head.

So, talk to us about the rails, like what safeguards are there? What should be done? What is being done? What could be done?

Roy Vella:
Sure. So, I would highly recommend anyone listening to read Yuval Harari. He wrote Sapiens and then he wrote a book called Homo Deus. And he has recently been talking about the fact that AI has hacked our OS, our operating system. And what he's referring to is storytelling. So, Yuval, by the way, brilliant scientist, brilliant researcher, brilliant writer, great books - the ones I mentioned. He wrote a third book, it's more about the future and I'm blanking on the title of the book.

But what Yuval will tell you is that the danger of AI is that it is doing what humans have been doing pretty much since we learned to communicate, which is tell stories. And he's like, the foundation of human civilization is storytelling. It is the ability to tell a convincing, persuasive story to another human. And AI can do that incredibly well.

And he's like, look, our world is based on it. More people have died due to storytelling than any other thing: religion, money, laws, regulations. We tell each other stories constantly about what's acceptable, what's not acceptable, what we feel and believe about the world and parts of it. And now we have a new intelligence that's pretty damn good at telling a mean story, and that's worrisome to Yuval and to me. I mean, that's the truth.

And the guardrails that you're talking about, I mean, what Yuval has suggested, which I think is a very good suggestion, is that AI is not allowed to fake being human. And what I mean by that is an AI has to always announce that it's an AI. When you begin an interaction on Twitter or X, whatever you want to call it, or on any platform, the AI is responsible to tell the user that it is an artificial intelligence. It is not a human. It is not allowed to post as if it's a human. It's not allowed to interact as if it's a human. And it's not allowed to tell stories without being clear that it is an artificial intelligence system.

I think that would help tremendously, because people would have a different opinion than if they thought they were interacting with a fellow human who was telling them a story. But it's still dangerous. I mean, there are still risks that come from having an artificial intelligence. And by the way, the biggest risk is that it will be smarter than us. So, the smarter-than-us part is, if I could hook my brain directly to yours, Dr. Richard, and we could physically link our brains --

Dr. Richard Shuster:
We could do some damage, Roy.

Roy Vella:
And that's what it is, right? This is a hardwired link between all human knowledge.

Dr. Richard Shuster:
Yeah. I mean, I think we're on ChatGPT-4 right now.

Roy Vella:
And five is coming.

Dr. Richard Shuster:
And five is coming. And each iteration gets progressively better. So, what does this thing look like in 2030? Sure, they're going to be smarter. I read that it'll be smarter than humans by 2029. And by 2036, I think I read, it'll be smarter than every human on the earth combined. That's the curve you're talking about.

Roy Vella:
Yeah. The numbers are dramatic. There's no question.

Dr. Richard Shuster:
So, let's talk about people listening to this, let's give them some positives here. How can we positively use AI in our careers? And I want to start there. And then, my second question is, what are the careers that you would have concerns about not existing in five to ten years because of AI?

Roy Vella:
Yeah. I mean, look, I have three teenagers, and I tell them ChatGPT and any LLM - by the way, it's not just ChatGPT. There are hundreds of LLMs now, never mind the visual arts and Midjourney and DALL-E; there's so much going on in that regard - I would say the initial wave of exceedingly strong value is in zero to one. So, going from a blank page to something. We probably can't quantify how many wasted years of human productivity there are of looking at a blank page, of trying to start a project, an essay, a paper, whatever it is. And we go "Huh," and people stare at a blank page.

What the LLMs can do for you right now is give you something to react to. So, you can set up a prompt that produces something, and you go, "Yeah. No, I need more of this, less of that." And I tell folks, it's like the 80/20 rule: it'll get you maybe 80 percent of the way there. Your job is to edit it, refine it, and make it become what you want the rest of the 20 percent of the way. At this point, there's an enormous amount of value in that, in getting productivity, in jumpstarting, in building momentum, in going from zero to one. I think that that's a huge value presently today. As it proceeds, I mean, it's going to get better and better.

So, what jobs are safe? I don't think any job is safe. I think it is somewhat ironic that when robots were taking manufacturing jobs, people weren't freaking out so much. But then, all of a sudden, when it's like, "Oh, knowledge worker jobs," when the lawyers are under threat, everyone's like, "Whoa. Wait a minute. Hold on a second." And that is true. The reading and writing and logic stuff is somewhat at risk.

Again, we should be clear: an LLM is not trying to be accurate. It's trying to be believable. So, that's part of the hacking that Yuval is talking about. It's choosing the next best word given all prior words and their weighting and importance in the context of what's happening. It's not trying to be accurate. So, it'll describe someone. You ask it to describe you, and it will say that you went to these schools and did these things, and that isn't right. It's not trying to fact-check.

Knowing what it finds about Dr. Richard on the Web, it says this is what his life should look like, or could look like, or might look like, not what it was. And you've already heard lawyers got in trouble because it cited cases and such that don't exist. They used it. They didn't fact-check it themselves. It was very believable. It just happened that they hit a judge who said, "I've never heard of that case."

So, I would say two things. The most important part of ChatGPT is the word chat. So, this is an interaction. When OpenAI released ChatGPT, you heard all these critiques: "Oh, I asked it this question, it was wrong." It's not a search engine. We're all so programmed into the world of search engines. We query, get a response, and then we validate whether the response is accurate or not. That's not what this is trying to do. It is not a query-search-response. It is an iteration. As you interact with an LLM, it will get better at understanding you, and what you want, and how it works in the context of what you're asking for. So, the people who are doing best with it as a tool are the ones who are engaging it in conversation. Like, literally, that's what it is.

When you and I first met, we knew nothing about each other. We started on, you know, middle ground, temperate ground. We talked, we felt each other out. And as we engage, we get better and better at interacting. It's exactly like that with an LLM.

And this is what I find interesting, we're going to enter a world where asking the right question is more important than knowing the right answer. And people who are better at asking the questions are going to do better.

And this is the other thing I would say: prompts. If we want an LLM to do something, we tend to treat it like a human, which again is a mistake. We give it a short sentence or maybe a paragraph. I have seen and used prompts that are pages long. So, you can give the LLM as much context and as much specificity as you want. I want you to act as if you are a professor at MIT who's an expert in this, this, and this; I'm asking you about this. In fact, the more you provide, the better your answer will be. And then, when it responds, you say, I need you to be more concise, or I need you to expand on this part.

It is an interaction where the more direct and directive you can be, the better the response will be. Better in the sense of getting what you want. Not about accuracy, again. You've got to check for accuracy, especially if you're asking for facts and things. Like, no, no, no, it's not going to produce facts. That's not what it does.

Dr. Richard Shuster:
What about other forms of AI? I want to put on our futurist hat here a little bit. So, let's go a few years in the future. We're rocketing up that exponential curve and we're nearing maybe the tip of the hockey stick. Based on everything that you've researched and the people that you've talked to in this space who are experts as well, how does AI show up in our daily lives three, five, seven years from now?

Roy Vella:
I mean, again, it is able to do scale processing at a level that we can't really conceive. So, you may have heard of the protein-folding efforts that are using AI now, basically generating every protein that's possible from amino acids. Incredible. Again, scale processing. Something that a human couldn't do. Any scientific effort where you have a scale problem, where there are too many options, applying AI is really, really interesting.

And, actually, I should say I think we chose the wrong A. So, artificial implies fake, plastic, not real, not valuable. And it just happened to stick back in the day when there were only a few researchers working on it. There are better As. I would say maybe augmented intelligence, or accelerated intelligence, or even alien. I mean, Yuval Harari says it's an alien intelligence. It is a non-human intelligence that we have created. Artificial seems to poo-poo it a bit for us. We're like, "Yeah. It's not really real," and that's a mistake.

And the reason that comes to mind in the context of your question is, any effort that people are making on a scale-processing problem, AI will help. It will augment. It will accelerate, whether it's protein folding or reading a mammogram scan or any large data set. Now we have scale processing, so we're going to solve a lot of medical issues and economic issues. If you let AI address these problems, I think there are going to be some really fascinating and counterintuitive solutions presented.

And by the way, we've already seen that at a really, really simplistic level in games. In both chess and Go, the AI has made moves that masters said, "Wow, I've never seen that before. That's a move unlike any move I've ever seen. I've never seen a human make that move." And the Go master who lost said - I'm indirectly quoting, but he basically said - "I need to study Go more. I thought I understood Go, and now I'm not sure that I do." Because the AI made moves that were unanticipated and that no one's ever made before.

And what's fascinating about those things, beyond the simplistic game-based level: an AI can beat an AI, an AI can beat a human, but a human-AI team can beat an AI. So, right now they have teams. It's a team of a human and an AI versus another human and AI playing chess, and you can't predict who's going to win. It's the combination of the types of intelligences that are working together. And a human-AI team can beat an AI, no problem, which is fascinating.

And for me, that's a good story because it is about the combinatorial value of these two different approaches. And we're going to see that in all sorts of unanticipated ways, too, I mean, just like the Go player was surprised. AI applied to problems is going to present solutions that we simply haven't thought of yet, or didn't occur to us, or wouldn't have occurred to us. And we go, "Huh? Oh, that's interesting."

Dr. Richard Shuster:
The mammogram thing is interesting because it raises the question for me about diagnostics. And I suppose you could program a robot to do a surgery, right? It's just moving scalpels and lasers and such.

Roy Vella:
Yeah. We already have that.

Dr. Richard Shuster:
It makes me wonder if going to medical school is a good idea moving forward. And, honestly, it's interesting to go back to, not so much ancient Egypt, but we have always tiered titles and, as a consequence, income potential based on the amount of knowledge that an individual has accumulated. And now we're living in a world where knowledge is everywhere. It's no longer this kind of turnkey, locked-down system.

Roy Vella:
Knowledge became widely available with the internet and now it's accessible with LLMs.

Dr. Richard Shuster:
Yeah, it's really interesting. I mean, I was actually having a conversation with somebody about whether college is even necessary anymore, outside of the football experience.

Roy Vella:
Well, the reality is we are still doing industrial-age education. I have three teenagers, and I had to buy them Texas Instruments calculators for their math classes because the schools didn't want them to use their phones. And one of my favorite tweets in this education realm was from a guy who said, "Look, we need to be training kids on how to search effectively and how to vet the information that they receive." It's not about knowing information. It's not about rote knowledge and regurgitation of knowledge. It's about finding knowledge and validating the value of that knowledge: is it accurate? It is the creative problem solving around it.

And it was a great tweet. I think it's not wrong. These kids need to be able to find information, assimilate it, and apply critical problem solving to that information. And AI is yet another tool in that area to do that more effectively, more efficiently.

Dr. Richard Shuster:
Absolutely.

Roy Vella:
One other book that I would raise for people is called Life 3.0. And I think it's a very interesting way of looking at the world. Max Tegmark is an MIT professor who wrote it. And very simply, Life 1.0 evolves hardware and evolves software. That's pretty much all of life. Life 2.0 is us. We evolve hardware but we design software. You and I have been enhancing each other's software in this engagement. Every interaction you have with another person, when you learn something new or teach something, we're upgrading our software. And Life 3.0, according to Max, is we design software and we design hardware.

And we're kind of at two-and-a-half, right? You can get pretty good alternative legs if you don't have any, or whatever. We are enhancing our hardware. Not completely yet. You know, the last step is downloading my brain into something else, and that's an interesting thing.

Dr. Richard Shuster:
I think people are working on it. That was really good stuff.

Awesome. Well, I knew this would be just a great chat, Roy. It was fantastic. I think, like anything else, there's good and bad with everything. But I want to ask you this, as you know I ask all of my guests just this one question, and that is, what is your biggest helping? That single, most important piece of information you'd like somebody to walk away with after hearing our conversation today.

Roy Vella:
I would say storytelling came through in our conversation about the risks of AI storytelling. But the flip side is also true, we are storytelling and digesting creatures. I tell folks all the time, no matter what your job is, no matter what you're doing, knowing how to tell a good story is crucial to success. For all my advanced education, it's improv training that was probably some of the most valuable stuff I learned about. And the problem is we don't practice it anymore. We used to, you know, the sun went down, the lights went out, and we sat around a fire and told stories to each other. And we don't do that anymore.

But the most successful people tell the best stories. I mean, it's as simple as that, whether they're a politician, or a CEO, or a leader of a church, or whatever, it is the ability to tell stories and the practice of doing that. You can't just read about telling stories. You got to practice it.

And so, in my daily helping and when I'm talking to entrepreneurs - which I do a lot - I'm saying you got to practice the story, you got to tighten that up, you got to tell me why I should believe that and how it's going to work. Storytelling is almost the crux of all success, in my opinion.

Dr. Richard Shuster:
Beautifully said. Roy, tell us where people can find you online and learn more about what you're doing.

Roy Vella:
Yeah. Roy Vella at anything sort of gets to me. Just my name on LinkedIn/royvella, or Twitter @royvella, or royvella@gmail, or whatever. I'm easy to find as one word because I have a fairly unique name.

Dr. Richard Shuster:
Perfect. And we'll have everything Roy Vella in the show notes at thedailyhelping.com as well, so you can get your fill of Roy which we all need. In all seriousness though, Roy, this was awesome. Thank you so much for coming on The Daily Helping. I loved our conversation.

Roy Vella:
Thank you, Dr. Richard. My pleasure.

Dr. Richard Shuster:
Absolutely. And I also want to thank each and every one of you who took time out of your day to listen to this. If you liked it, if you learned something from it, go give us a follow and five star review on your app of choice, because this is what helps other people find the show. But most importantly, go out there today and do something nice for somebody else, even if you don't know who they are, and post in your social media feeds using the hashtag #MyDailyHelping, because the happiest people are those that help others.

There is incredible potential that lies within each and every one of us to create positive change in our lives (and the lives of others) while achieving our dreams.

This is the Power of You!