Nigel Cassidy: Artificial intelligence is changing how we work. So how can you best adapt to reap the benefits, and what are the risks? I’m Nigel Cassidy and this is the CIPD Podcast.
NC: Until now HR really only had humans to worry about – the clue was in the name, Human Resources. But suddenly AI is everywhere. Open access to tools like ChatGPT-3 and 4 has opened the floodgates. Like it or not, the workplace machines are coming. In fact, we thought we’d try some text-to-speech AI ourselves.
AI: Hi Nigel, hi everyone. My name is Emma. I think you’ll find computer models like me are already saving organisations precious time and resources. For instance, I’m really good at screening the best job candidates. I can revolutionise your staff analytics, your training and development. Or maybe I could host the rest of this podcast for you.
NC: No thanks. I actually like my job, but can you explain what it is you actually do in simple terms?
AI: That’s easy. My strength lies in automating processes that previously needed a lot of brainpower, time and effort. I can comb your entire database or indeed the whole of the world wide web. I then analyse the text or data and mimic the problem-solving and decision-making capabilities of you humans. I have access to more knowledge than any human has ever known – that should free you up to do other tasks. But please note I have no consciousness. Well, not yet.
NC: Now what we just heard there was pretty basic AI-synthesised speech from text, but today we want to focus way beyond that, to discover what the arrival of large language models – GPT-3 and 4 – and other AI capabilities means for your organisation. I mean, just how do you work out where AI might speed up your business? How do you rewire or redesign things to get the productivity benefits and understand the impact on the work experience itself? Joining us firstly is the Chief People Officer of the talent management organisation Beamery; in all her HR roles she has focused on harnessing technology and data science to make the experience of work better. It’s Katie Obi.
Katie Obi: Hi, Nigel. Thanks so much for having me.
NC: Next, a sociologist with a deep professional interest in the impact of automation – she’s a senior lecturer at the University of Strathclyde, Doctor Kendra Briken. Hello.
Kendra Briken: Hi, Nigel. Good to be here.
NC: And we welcome a CIPD Chartered Fellow who recently took the organisational development lead, shifting the culture at CYBG, which is the Virgin Money Group. He’s big on purpose and values. It’s the founder of the Green Juniper Consultancy, Francis Lake. Hello.
Francis Lake: Hi, Nigel. Good to be here.
NC: So, Francis Lake, I mean we all have a sense, I think, that something has changed recently – we’ve brought AI to a kind of tipping point.
FL: I think the big thing that’s really changing at the moment is that AI is suddenly in everybody’s hands. And so you’re seeing, you know, thousands and thousands and thousands of experiments on an individual level. I then think there is another thing that is sort of right on the cusp, which is ChatGPT being built into Bing and also into Microsoft Copilot, and I’m particularly excited about the prospect of being able to use AI to produce my PowerPoint presentations probably ten times more quickly than I’ve been able to in the past.
NC: So, it’s like we’re all getting a taste of it -
FL: Exactly, yeah.
NC: It’s also power – it’s no longer just the coding people – the IT department – that are having all the fun with it. Erm – come back to you in a minute, Francis. Katie Obi, can you remind us, I mean, what kind of business areas is AI in most use in already? And maybe can you give us a few examples of the newer kind of applications that your clients are going for?
KO: Yes, so I think AI has been around for a while in various different formats, and I think one of the areas that has really embraced it fairly recently is around data science and being able to get insights into the vast amounts of data and information that we have in organisations now. But I think, as Francis was saying, one of the big changes recently is getting AI into the hands of everyone, and the difference that makes, in terms of functionality like GPT-4, to be able to take that to the next level. So we’re starting to see AI and language models come into all sorts of enterprise applications, with really interesting search functionality to be able to bring out more insights, interpret what’s in the data there, and really help us to make that golden transition that we’ve all wanted as organisations: to move from being able to collect and report on the data to actually being able to make different decisions, drive actions and be much more prescriptive about how we use it. We are seeing a lot of different changes, as Francis mentioned, in terms of productivity as well. So, our ability to be infinitely more productive than we were even a year ago with some of these technologies – whether it’s creating our PowerPoints, whether it’s creating our eLearning, whether it is being able to understand more about skills data and what’s going on both in our organisations and in the wider world.
NC: Kendra Briken, it is pretty scary – I mean, we’ve got the whole of the world’s knowledge at our fingertips. But you were telling me before this podcast that people in organisations do seem to have a bit of a basic confusion, or a lack of knowledge, about AI capabilities.
KB: Yeah, thanks, Nigel. I think the interesting thing is already in the framing – you know, the floodgates open, and we talk a lot about ‘we’, but who actually is ‘we’? And Francis, you made this point: it’s now far more accessible than before. But also, it is still channelled through companies and big monopolies like Google and Microsoft. We will see what will happen with Apple and Amazon, and I think that’s the interesting part, isn’t it? Because we could say that, oh, with search engines we already had this. But everybody knows by now, maybe, that this was channelled through advertising money and what have you. But now we don’t even see that anymore. You sign in, you type something in, you get something back, and it really feels as if you are talking to someone – but then it isn’t, you know, it has its limitations. But still, based on the massive resources that went into it, in terms of energy consumption and the like and server accessibility, of course these companies can deliver the service. And it will be interesting to see, by the way, what will happen to this in terms of accessibility when the pricing models come in, when you can’t do anything about that anymore. So that’s the one confusion, I think. And the other confusion is, I think, how we overlap it with what is happening in the realm of algorithmic management – I think Katie, Francis, you know about that. That’s been with HR for a while already, right? And now we call it artificial intelligence and, you know, overlap large language modelling with very basic algorithmic management, and I think that’s interesting to see as well, because I think we will see rebranding of tools that will be sold to companies – offered to companies. A new market for consultancies, which is always also good, but I think -
NC: More snake oil salesmen – that is always the case with any advanced technology, isn’t it? That someone will come in offering to sell you something -
KB: It just seems to be, you know, the claim now seems to be that it’s innovative and that you can’t do business without it – and I think that is really important. So, technology here is a big promise. But it’s still a tool, and the technology means you need to think about skills, about utilisation, about processes, about voice, about the human in this loop, ha-ha. I think that’s important to keep in mind -
NC: But of course, it can be pretty disastrous when the human is not in the loop. But Francis, I saw a story the other day. It was about an Uber driver who was told by automated HR to stop driving or be fired, basically because he’d taken a very roundabout route. In fact he had to go a different way because the river crossing was closed. Erm, the AI systems had misinterpreted the data. The guy had to fight to keep his job. Now clearly, I know you’re going to say, well, you should check this information, you shouldn’t just let systems churn out results which affect people without checking them. But it does illustrate the fact that this is a minefield for business -
FL: Yeah, and I do think it is. I sort of think, at the moment, there is almost this dialogue of, broadly, either it’s cheating on the exams or the robots are stealing our jobs. Erm, or it’s a toy to make [inaudible]. Where I think almost any example you could pull out would be true, but it would also have been true before. And I suspect that before there was any AI, taxi drivers were threatened with firing for taking roundabout routes by human beings. So as ever with that kind of thing, if something looks odd, the ability to investigate it and explore it and analyse it becomes much more important. And I think that’s – to what Kendra was talking about – really, the human skills we need are more important, I think, than ever. And that for me is one of the things I find the most interesting: we need to examine what kind of skills we need to hire for. Do we need to build them? Do we need to maintain them? How do managers coach people through things? And that, I think – particularly being on a CIPD podcast – that sort of thing is part of the work for our profession.
NC: Okay, so Katie Obi, can you begin to talk us through the process of determining what parts of your business could benefit from a higher degree of AI? And then how you work it through, particularly with regard to the effect it’s going to have on the people and the skills that they’re going to need – or not need, indeed?
KO: Absolutely. And I think one of the things that is really important to do is effectively embrace the chaos. You know, AI is here and is being used by your people – it’s often being used by your people in ways that you can’t control as well. So, getting ahead of it is one of the most important things that businesses can do at the moment. And finding the right ways to use this great technology, in a way where you do have the right governance and controls over it, is the best thing that you can do – so that people aren’t feeding your commercially sensitive information and IP into something where you have no idea what is done with that data. So, the first thing would be to really sit down as an organisation and think about how you want to incorporate AI in your organisation and how you want to handle some of the data privacy elements.
Then I think, once you have a strategy in place in terms of how you’re going to think about adopting AI, starting to pilot it is really important. I wouldn’t recommend any organisation right now does a wholescale rollout of AI without testing it in different places. And I think there are a few different things that you test. You have a hypothesis, you go and form a cross-functional working group around it – I think it’s very important that this shouldn’t be something that is just run by your IT team; you need to have people from across the organisation with different perspectives running the pilot – you see if your hypothesis is proved, and then you can go about rolling that out further. I think also one of the most important things that is going to happen – and Francis, to your point about the additional skills you need – is that we will start to move away from some of the roles that we currently have in organisations and need others. So, we will need to have more roles around governance. People who are thinking through: what is being created here? Is it driving to the right results? Is it being used in the right way? But also, is it biased in any way?
And we were just on the topic Kendra raised around algorithms. Algorithms in some cases have been biased for many years, and there are some really public examples of that out there. And they are biased because human beings are biased, and they’ve programmed them – and the same with AI. AI is trained in a different way, but it’s trained on data that humans have created, and with that comes missing data, especially from diverse populations, and biased information as well. So, somebody who really has the governance around making sure that we are validating that the results being generated are fair and ethical is really important. Plus, also, we’re going to move to more of a type of environment for organisations where we’re doing more feeding of the learning models as well. So, this is about: what data do we have across the organisation? What knowledge do we have? How do we keep training the models to make sure we are getting better and better results? So, thinking about how we structure teams that are capable of doing that, which I think is a very different thing from what we do at the moment. And then the final piece, I would say, is that organisations should do a lot of training of leaders and managers to understand more about the technology, more about the things they have to look out for, and more about what skills are needed in the organisation going forward, so that you can get the right buy-in and adoption. Because, like every technology rollout, organisational change management is really important to make sure we get the most out of it, it’s used in the right way and it’s adopted.
NC: So, from what you’re saying, it seems to me that the kind of people whose skills will be in demand might be those who are actually good at collaborating with AI, who are good at those prompts, constantly asking better questions to get a better output. But of course, Kendra, there will be job implications, won’t there? I mean, AI can’t do everybody’s job, but a month or so ago Goldman Sachs published a report showing that AI could replace the equivalent of 300 million full-time jobs worldwide. PwC, nearer home, found a third of the respondents polled were worried about their role being replaced by technology. And you’ve looked at automation, you know, in previous incarnations. What is your sense of how it’s going to play out, and how do we actually ensure that we don’t lose too many jobs and we actually improve the jobs that we do have?
KB: Yeah, thank you. No less than that – it’s a big question, of course, because these global estimates are always prone to smoke and mirrors, and they look pretty scary, but I think we have been through other phases of scaremongering. And what is important, I think – a lesson we can learn from the last one – is that something can happen such as the pandemic, which also has not actually triggered as much of a tech increase as we had thought beforehand, right? The important thing here is that if you reduce skills to tasks and you get rid of one task, you are in danger of losing a skill, because you might not be able to perform other tasks that make up your job in the future. What I mean here is, think about a simple thing such as learning arithmetic in mathematics. There is evidence that students who never learned that at school never really engage with mathematics anymore, because they just lose the foundations – it’s like using pen and paper.
NC: Children can’t use maps, can they, anymore?
KB: Exactly. So, we lose that – and I know I am always happy to discuss this idea of GPS, you know, the old word for that. So that will affect everyone, and I think that’s really critical to keep in mind. And what Katie has said is, I think, really important. All that has been outlined here means being really cautious about where we want to go with this, and who has a say in that. And also, I think there is some evidence here from researchers who did research in Silicon Valley that if you get to a point where you have the idea of owning ethics within your own organisation, you make the people who develop it partly responsible for acting ethically, and then tensions come in, because businesses, as we know, are run by performance management figures like KPIs, by technical solutionism, by meritocracy. So, who has a say in what, and how strong can you make them, right? We had the debate about whistleblowers, and we will see that again. And just coming back here to what Francis said, that would be for me the distinction with the human interaction: I will never agree that anyone in the world can be unbiased in their decisions. You will always find a bias, full stop, because ‘unbiased’ is a bit of a pointless notion, yeah. It’s a question of how you have thought about what bias is, and whether the other person can actually fight against it.
NC: So, just to round off talking about the skills that will be required, Francis – how do you think things will change? I mean, Kendra has almost implied we might have a kind of wave of fairly low-level jobs teaching these systems stuff. Do you think that generalist jobs might go? We saw this hollowing-out the last time we had a big technological wave.
FL: There are so many examples, and there is a piece that I’ve started to think a lot around, where the promise of greater productivity for lots of people is huge. But if, let’s say, you are making pizzas – there is a whole raft of roles in society that are not engaging with tech on a day-to-day basis, and on Kendra’s point I think the risk of a kind of social difference there is even greater. The bit where I think there has been a real shift in skills is the need to pay attention to the route into developing skills, because it’s almost easier to get through, say, the first stages of development of anything. Which is great if you already know how to do those, but if you don’t, then your ability to integrate, I think, is challenging. And I think the sorts of skills around managing and coaching people become really different. And if for a moment I could go back to the 1990s – I taught for a couple of years in Sri Lanka in the 1990s, you know, the days of dial-up internet and so on – three kids that I was teaching history to, well, I was really impressed: they managed to source from the internet a whole raft of information about the subject we were doing. What they didn’t do was then apply any judgement to it or put it in their own words, you know, so it was cut and pasted and stuck in.
So, the discussion I was having with them was: I was hugely impressed they’d sourced things brilliantly, but they’d then not applied the human judgement on top of it. And I think where we will need to get to in terms of skills is being able to say, okay, not ‘did you use AI to help you’, but how did you use it? Where did you get to with it? What were the questions you were using? What was the discussion you went through? Where did you supplement it with your judgement? Who did you talk to, to back it up? So, you’re kind of getting people to build the rounded solution, and really getting people to look at how you use those human skills to get to good solutions. If we don’t do that, my great worry is we are just going to generate a load more stuff, you know, lots more content. Which then ultimately ends up with somebody having to make a decision about it, and people who are already kind of bottlenecks in companies just have more things that they pretend to read and pretend to make sounder judgements on.
NC: Katie Obi, I know you want to come in on that. But I was just thinking earlier about how maybe simpler versions of AI were incorporated into customer services, with chatbots and everything. I mean, they were pretty rubbish, weren’t they? The public hated them, and they weren’t achieving what companies wanted. I guess maybe they didn’t go through this process that you were discussing earlier of rigorous trialling and all that. So, I mean, we may be setting ourselves up for a fall here with the next wave of AI.
KO: Yes, absolutely. And there is so much that I think we really need to unpack around this. I wanted to come in maybe in defence of AI a little bit. I think Kendra and Francis raise really, really important points around the impact it can potentially have from a societal standpoint and on different types of jobs. But I think there is also another side to that too. I think all these things are true, and all things need to be looked at very critically to make sure we are doing the right things and we’re using the technology responsibly. I also think AI can open a lot of doors. So, if I look at ChatGPT-type generative AI, that technology will disproportionately impact white-collar workers. Now I think there are lots of things to unpack around that – around how it generally doesn’t replace whole jobs, it adds productivity to parts of jobs, and that makes it difficult to say, well, 300 million jobs will be completely wiped out. It also creates more jobs, in terms of the other things that need to be built around it to make sure that we are successful. I think there are also other forms of robotics that will impact blue-collar jobs, and all of these things are happening at the same time. I also think AI opens opportunities, not just in the new jobs that are created: if we look at GPT-4, it can generate code now. So, you don’t have to go to university and do a computer science degree and learn how to be a developer in order to generate code. Yes, it’s always worth having someone checking that code and making sure that it’s been done correctly, but doors are suddenly open to new industries and new opportunities for people like they’ve never been before. So, I just wanted to point out that there are two sides of the coin – without discounting the really important points that Kendra and Francis also raise, that we need to make sure we’re very careful about using this technology wisely.
NC: Because people used to be valued for either thinking or doing, didn’t they, Kendra? And the doing has been largely or partly taken over by machines; the thinking, we thought, the people would have to do. It’s almost like, from what I’ve just been hearing there, the boot might be on the other foot: that it’s the thinkers that might be in less demand, and the doers, particularly those with practical skills who are not being replaced by machines – they will still have work.
KB: I think, where I am coming from, I will always say the distinction between thinkers and doers is questionable in the first place, because I think a lot of doers – thinking about my own dad here, ha – he’s a big doer, but he’s also a big thinker.
NC: I was vastly over-simplifying. If there are any enraged plumbers listening, I apologise.
KB: I think we are really listening to the case against, and I think we’re in a terrain where it is really important to remind ourselves that it’s all about where we create the spaces where discussions like ours here can take place. At the moment we have fast-accelerating tech development that comes as a promise that is never questioned. Now my first question is: how much do we still fiddle around before we go on a Zoom call – oh, your camera isn’t working – we’re a couple of years in now and still it’s not working well. So, we know there is this ideal world where every technology works fantastically, but instead there’s the digital housekeeping we need to do every day, that’s part and parcel of our jobs, where we say, oh, I sent you this, it’s in Cloud X or it’s in Provider Z – we don’t know, right? So, what happens is that a lot of the productivity gains that are promised to us are just lost, because nothing is really implemented carefully enough that we always have access to the same resources. And the other thing I wanted to address, Katie – I totally agree there are real benefits – hence I would never say that anything we discuss now is actually new. I think a lot of these questions we have had to ask for a while; so, we are currently doing a project with engineers on how they perceive the shift towards human-centredness.
And it’s quite interesting, because they learned that modelling and engineering means, you know, yeah, a model is always wrong but some of them are useful. And then if you say, well, how about saying a model is often wrong, but sometimes it’s useful and it’s right, but it is also very dangerous – then they start reflecting. But they also admit that often they don’t even have the resources, or they are given a project and then they are driven by their own adventures. They just want to do something, you know, this idea of doing something new that we all know. You think, oh, that’s new, that’s great, that’s fantastic, and then we forget: where do we parcel that in? That would be my concern – that our students lose the ability to ask the questions that are needed. I think that’s a big issue. Education, yes, we will benefit from AI, but what are the benefit gains, so that we can use that time? Or will our workloads increase, and stuff will be cut away, and then, well, what have we won? Will we have increased student numbers because AI will help us, or will we go back to a model where I have more time to spend with my students on face-to-face learning? Or HR in each of their organisations – can they actually talk to people again?
NC: Well, that’s a good point. Erm – is it HR that is in the driving seat for this, Francis, in terms of pushing organisational change?
FL: It’s sort of funny: in HR, Covid presented this huge demand on the profession, but the demand was effectively on operational HR, largely. I think with AI it requires the profession to think really quite differently. I will say it’s a profession that loves benchmarking what we do against each other. And you can’t do that with things that are erratically changing, really, at the moment. You know, keeping up with the herd when the herd hasn’t started moving is not really the answer. So, kind of benchmarking is quite tough. There are then things that hold us back, like the way we’ve implemented technology into HR is tough, and a lot of HR functions would struggle to get up the sort of technology pecking order, and so their ability to get into this is quite tough, I think. And I would love us to go to the profession and try and push on this, so we really do a number of things. So, one is to look in as many ways as we possibly can throughout the colleague journey, and really ask: at each step of the colleague journey, can either the individuals deploy AI to change things, or the experts deploy AI, or can it be deployed into the process?
And if I could just give one simple example: everybody hates the kind of quarterly compliance learning modules. Every organisation has them, we all hate them; we all click through PowerPoint slides as fast as we possibly can and then guess the answers. And if the aim is actually to make sure that people know 8 out of 10 things, then giving people the ability to go and find that information for themselves in a faster manner – and essentially have a conversation with something that will tell them the things that they need to know – could be hugely transformational. It could save a huge amount of time and get people to the things they really need to know. But it does completely disrupt a kind of swathe of providers who have a kind of vested interest, and that is true, I would say, at every single step of the kind of colleague journey. So, I think our work as a profession is to get ahead of this. How do we use it as well as possible? How do we encourage people to try AI to help them with productivity? And then, I think really importantly, we use it to improve productivity and efficiency rather than just creating more stuff. Erm – the other great big people debate at the moment is really around things like the four-day week. If you take the potential of AI, it makes the possibility of a four-day week far more kind of practical. That’s my optimistic thing. My pessimistic thing is that we’re going to create so many things; if HR doesn’t get ahead of the curve, we end up writing policies telling people they can’t use AI, and then managing the generation of huge numbers more documents – things that make us less efficient. I don’t think HR is at the vanguard of it, but I think we need to be, because it is that use of AI by humans that I think is going to make a real difference.
NC: So, Katie Obi what would you add to that from your practical experience?
KO: Yes. I mean, clearly, I’m biased in this, given the role that I do. But I feel really strongly that the most successful companies in the next five to ten years are the companies where the people functions get front and centre of this and really embrace it upfront. And, to Francis’ point, not as the policy brigade going through this, but as the ones who really think about, well, how do we do things differently? And one of the best things that people functions can do is work out what skills we need within our own functions in order to be able to do this well. And it’s skills like really being able to drive things like agile projects and the prototyping elements; it’s skills around governance and assessing outcomes and biases and ethics; and it’s also skills around data as well. So, if I look at people functions versus other functions in the business, as much as it pains me to say it, I think that people functions are behind the curve of other areas of the business in terms of being data-and-analytics led, and we need to get ahead of the curve to do this. We have to work out how we’re feeding in all of the right data, how it all fits together, in order to get insights and to be able to take advantage of these technologies – and it’s a completely different skillset from what we have in people functions. And I think if we can look at building those capabilities within our teams, and combine them with the wonderful capabilities we already have, especially around being human-centric and thinking about people’s experience, then that is where I think the magic is and where I think companies will be most successful.
NC: And I was just wondering, Kendra Briken, from what you were saying earlier about whether you actually need a new AI solution – what process do you think people should go through to decide whether to AI or not to AI?
KB: Maybe I’m picking it up from the end of the stick that Katie handed over there, because I think those skills are needed in HR. I mean, here at Strathclyde, in the business school, we are teaching master’s programmes – that’s the future of the profession, it’s CIPD-accredited and everything – and what strikes me in teaching the students, who come from all different countries, is that there is a lack of knowledge, not necessarily with regard to data, but about what the data signifies. So, the knowledge about the simple social reality of work and employment is quite thin. To give you an example: we did a project on the rollout of technologies during Covid in social care. And the HR managers we spoke to were kind of upset, because they said, well, everybody knows how to use their phone at home, why can they not use it to type in care notes, to do this and that? So, all of a sudden, this smart device became the be-all. But they totally dismissed the social reality of social workers going into the home of a person and interacting there – you know, wiping things and doing things. To me this is something we need to consider when thinking about what knowledge about data we want, and I think HR needs to be the expert in knowing what work is done by the employees in the organisation – they need to know about that, and not just by surveying it to create yet another data point, right? As for what you said about how we make AI relevant, I think Katie gave us all the right points here earlier on. It’s about testing, it’s about resources. And when we go back to what Francis said about fears, I just fear, from what I see, that tech is thrown into things because there are too many CEOs now thinking, oh, we need that, right? We need to have that. The question will always be: do organisations think about it carefully when they join the bandwagon? Are they prepared, do they have the resources, how do you manage the knowledge that is created?
How do you make sure that the databases are up to date, and all of that? These are all costs that might be incurred that are hidden at the moment. And I still love this – I mean love as in a love-hate relation – there was talk, in 2007 I think, about ‘artificial artificial intelligence’. Think about how many jobs you might simply create because people need to do all the analytic work and make it look as if it’s all running smoothly, right?
NC: Okay, so bringing this to a conclusion. Francis, you’ve touched on some of these issues, but just give us a quick tip or two about how you start this process, and with whom, to reshape and reimagine your business models to incorporate AI technology.
FL: I sort of think there are two approaches. And I think Katie was great at articulating almost the enterprise view. We know loads of workers are using ChatGPT or other tools, and they’re kind of trying to do it in a hidden way in case they get into trouble or get told to stop.
NC: Or they’re getting it to write poetry.
FL: Yeah, yeah -
NC: Trivial – relatively trivial -
FL: Yeah, so there is a piece of making it okay – showing we’re paying attention, showing we’re interested in it, and giving people who are interested the opportunity to, you know, kind of show their skills and so on. It doesn’t mean you then follow through on everything that is discovered. There may be things where people go, we want to do this, and you go, no, that’s proprietary data, or whatever. So, it’s a start, but it’s giving people that kind of encouragement to explore. And the other thing we were trying to do – Kendra said something earlier about this – you’re not doing it from a position of power, you’re doing it from a position of curiosity or exploration. So, it’s a kind of slightly different approach, and some of that reflects a bit of my philosophy: I often think companies either behave like you can trust your employees or they behave like you can’t trust your employees. And generally, I think whichever one you adopt becomes true.
NC: We won’t get into the discussion now of whether you can always trust GPT, but it will give you some options. Thank you, Francis Lake, Doctor Kendra Briken and Katie Obi, for such a thoughtful and at times mind-boggling discussion. I’m going to have to have a lie down after this one. And remember, we have these kinds of conversations every month here – for instance, if you haven’t caught it yet, our previous edition on pay transparency generated a shedload of likes and shares. So please check us out and subscribe to the podcast. But since we’ve been all about AI, let’s hear how GPT-4 responded when I asked it for a strategy to boost AI use.
AI: That’s a great question. A good strategy involves identifying potential use cases that can solve your key challenges or create value for your stakeholders; prioritise your use cases on criteria such as budget, data availability and expected benefits; then you can develop a roadmap for implementation.
NC: Until next time from me and my new AI friend.
AI: Goodbye.