Building an AI-Ready Finance Team Without Losing the Human Side, with Tariq Munir
The Diary of a CFO · February 12, 2026 · 00:49:53

In this episode of The Diary of a CFO Podcast, host Wassia Kamon sits down with Tariq Munir, digital transformation advisor and author of Reimagine Finance, to explore what it actually takes to build an AI-ready finance team.

Tariq shares why the biggest barriers to AI adoption are behavioral, not technological. He explains how to identify whether a team is truly data-driven, why streamlining workflows must come before automation, and how CFOs can create a culture of thoughtful experimentation without compromising accuracy or compliance. They also discuss the emerging skills finance leaders need, including change management, emotional intelligence, and responsible AI governance.

This episode offers clarity for CFOs and senior finance leaders navigating transformation, complexity, and the evolving expectations of the role.

Key Takeaways

  • AI Readiness Starts with Trust in Data
  • Process Optimization Comes First
  • Experimentation Requires Boundaries
  • Growing a Digital Mindset Across Teams is Essential

Key Timestamps

00:00:00 Welcome

00:01:22 The one litmus test that shows if your team is AI-ready

00:04:07 What broken workflows look like and why AI makes them worse

00:09:03 Earning the right to experiment in finance

00:13:00 Why most AI projects fail to deliver value

00:20:06 How to build a digital mindset across your team

00:24:14 The skills future CFOs need to develop now

00:25:40 Why CFOs are positioned to lead responsible and ethical AI

👉 If becoming a CFO is in your 5-year plan, get your free CFO Readiness Scorecard here:

http://thecfo.scoreapp.com

📬 Get Involved

Have a question or topic suggestion?

Email: Ask@thediaryofacfo.com

Visit: https://www.thediaryofacfo.com

🔗 Connect with Guest Tariq Munir:

LinkedIn: https://www.linkedin.com/in/tariq-munir/

🔗 Connect with Host Wassia Kamon:

LinkedIn: https://www.linkedin.com/in/wassiakamon/


00:00:00 --> 00:00:02 Hello and welcome to the Diary of a CFO podcast.
00:00:02 --> 00:00:05 I'm your host, Wassia Kamon. I'm a CFO with
00:00:05 --> 00:00:07 a background in accounting, FP&A, and I started
00:00:07 --> 00:00:10 this show to talk about what leading in finance
00:00:10 --> 00:00:12 really looks like and what it takes to become
00:00:12 --> 00:00:14 a CFO. Each week, we explore how today's top
00:00:14 --> 00:00:16 finance leaders build high-performing teams,
00:00:16 --> 00:00:18 partner with CEOs and boards, and lead through
00:00:18 --> 00:00:21 growth and transformation without burning out
00:00:21 --> 00:00:23 in the process. Today, I'm super delighted to
00:00:23 --> 00:00:26 have with me Tariq Munir. He is a keynote speaker
00:00:26 --> 00:00:29 and digital transformation and AI advisor, with
00:00:29 --> 00:00:32 over 20 years of experience guiding Fortune 500
00:00:32 --> 00:00:35 leaders through organizational change. He's also
00:00:35 --> 00:00:37 a LinkedIn top voice and the author of Reimagine
00:00:37 --> 00:00:40 Finance. It's a book that explores how finance
00:00:40 --> 00:00:43 leaders can navigate disruption while amplifying
00:00:43 --> 00:00:46 uniquely human capabilities. Welcome to the show,
00:00:46 --> 00:00:48 Tariq. Thank you, Wassia, for having me. Super
00:00:48 --> 00:00:51 excited to be here. Oh, same here, because I
00:00:51 --> 00:00:53 really want us to be able to talk about how to
00:00:53 --> 00:00:57 build an AI-ready finance team in
00:00:57 --> 00:00:59 12 months without losing that human side, which
00:00:59 --> 00:01:01 is really what I really liked about the angle
00:01:01 --> 00:01:03 you took in your book. So I'm curious to understand
00:01:03 --> 00:01:07 from you when you say AI-ready finance team,
00:01:07 --> 00:01:10 what does that actually look like in reality
00:01:10 --> 00:01:13 day to day? So, a great question, Wassia. And
00:01:13 --> 00:01:16 well, you know, theoretically, we can do a whole
00:01:16 --> 00:01:19 lot of readiness assessments. We can do all those
00:01:19 --> 00:01:24 pre-assessment or pre-fill-out checklists, do
00:01:24 --> 00:01:28 all those sort of questionnaires. However, I
00:01:28 --> 00:01:31 always say that look at just one day in practice
00:01:31 --> 00:01:35 that can actually give you a good litmus test
00:01:35 --> 00:01:40 if you are ready to have AI in your business
00:01:40 --> 00:01:44 or not. And that one litmus test is understanding
00:01:45 --> 00:01:47 or evaluating how are you making your decisions
00:01:47 --> 00:01:53 today. If data is just another chart on a PowerPoint
00:01:53 --> 00:01:56 deck, and that is it, and does not fundamentally
00:01:56 --> 00:01:59 drive how you are making decisions, I'm sorry,
00:01:59 --> 00:02:03 you are not ready. Most of us think that once
00:02:03 --> 00:02:06 we get AI, we will become data driven because
00:02:06 --> 00:02:09 then we'll have more insights, faster insights,
00:02:09 --> 00:02:11 and so on and so forth. It is in fact the other
00:02:11 --> 00:02:16 way around. To really leverage AI, we need to
00:02:16 --> 00:02:18 have a data-driven mindset. Because at the end
00:02:18 --> 00:02:20 of the day, data-driven is not a technology
00:02:20 --> 00:02:24 issue. It's a mindset thing. We need to be able
00:02:24 --> 00:02:27 to use the output of machines, rely on the output
00:02:27 --> 00:02:30 of the machine. And to do that, we need to be
00:02:30 --> 00:02:33 able to have that courage to take a decision
00:02:33 --> 00:02:37 based on data. Okay, while our gut or instinct
00:02:37 --> 00:02:40 tells us otherwise because a lot of times we
00:02:40 --> 00:02:42 go with our gut, we go with our instinct. And I
00:02:42 --> 00:02:43 don't say that, you know, don't go with your
00:02:43 --> 00:02:45 gut or instinct. That's what makes
00:02:45 --> 00:02:49 us uniquely human. But relying on data
00:02:49 --> 00:02:51 when data is telling us an entirely different
00:02:51 --> 00:02:55 story is what makes us ready to go for any technology,
00:02:55 --> 00:02:58 for that matter. Oh, I would like to see how it
00:02:58 --> 00:03:01 works in real life, right? So let's say you walk
00:03:01 --> 00:03:03 into a finance department today. What are the
00:03:03 --> 00:03:05 signals that will tell you they're not ready for
00:03:05 --> 00:03:08 AI, right? And that's even if, like you said,
00:03:08 --> 00:03:10 the technology is out there. So I walk into a finance
00:03:10 --> 00:03:13 team and what tells you that they're not data-driven
00:03:13 --> 00:03:17 enough already to be ready for AI? So
00:03:17 --> 00:03:20 as I mentioned, Wassia, if most of the decisions
00:03:20 --> 00:03:24 are still based on gut feel or the person, instead
00:03:24 --> 00:03:28 of the merit or what the data is telling us,
00:03:28 --> 00:03:32 they are based on the hierarchy of decision-making
00:03:32 --> 00:03:36 or hierarchy of people in the organization, we're
00:03:36 --> 00:03:39 not data-driven then. We are more people-driven.
00:03:40 --> 00:03:42 No, sorry, not people-driven; I would say we are
00:03:42 --> 00:03:48 more guided by the instinct and gut and our
00:03:48 --> 00:03:52 experience as opposed to what data is telling
00:03:52 --> 00:03:55 us. Apart from that, I would like to point out
00:03:55 --> 00:03:58 two more things. One is, of course, being data-driven.
00:03:58 --> 00:04:01 Secondly, it's about workflows being
00:04:01 --> 00:04:05 too complex. If I walk into a finance function
00:04:05 --> 00:04:07 and their processes are broken, their processes
00:04:07 --> 00:04:10 are not supporting automation, there are too
00:04:10 --> 00:04:15 many manual workarounds. A manual workaround is
00:04:15 --> 00:04:19 not a bad thing per se, but doing it in the form
00:04:19 --> 00:04:23 of something like creating a manual workaround
00:04:23 --> 00:04:28 to overcome some system issue is what I'm talking
00:04:28 --> 00:04:30 about. So if those workflows are too complex,
00:04:31 --> 00:04:34 all we are going to do using AI is amplify that
00:04:34 --> 00:04:40 complexity. So I always start off with streamlining
00:04:40 --> 00:04:44 those workflows. And lastly, a very important
00:04:44 --> 00:04:47 element I always recommend and advise my clients
00:04:47 --> 00:04:51 around is that if people are hesitant to experiment,
00:04:52 --> 00:04:53 everything needs to be safe. Everything needs
00:04:53 --> 00:04:56 to be risk-averse. Being in finance, we are risk-averse.
00:04:56 --> 00:05:00 That's how we are trained, right? But
00:05:00 --> 00:05:03 AI on the other hand requires some experimentation.
00:05:03 --> 00:05:05 It's not a kind of an implementation which will
00:05:05 --> 00:05:07 happen like an ERP or any other cloud or SaaS
00:05:07 --> 00:05:10 solution where there'll be clear go/no-go
00:05:10 --> 00:05:13 decisions. And you will always have an output
00:05:13 --> 00:05:16 which is not very accurate. Then you will retrain
00:05:16 --> 00:05:18 your models to get more accurate output and so
00:05:18 --> 00:05:22 on and so forth. So you need to have that experimentation
00:05:22 --> 00:05:24 mindset. Now, these things which I'm talking
00:05:24 --> 00:05:27 about right now do not really need technology
00:05:27 --> 00:05:29 at all, as you would have noticed, right? These
00:05:29 --> 00:05:32 are mostly mindset-driven. These are mostly
00:05:32 --> 00:05:34 behavior-driven things, which would actually
00:05:34 --> 00:05:38 drive and get you ready to leverage AI. And I
00:05:38 --> 00:05:40 also don't say that you do all of these things
00:05:40 --> 00:05:43 and then go for AI. Of course, that's also not
00:05:43 --> 00:05:46 practically possible. But starting to realize
00:05:46 --> 00:05:49 and starting to acknowledge that we need to change
00:05:49 --> 00:05:52 is the first step. And then working on this while
00:05:52 --> 00:05:54 you are, of course, working on your AI and digital
00:05:54 --> 00:05:57 roadmaps as well is what sets us up for success.
00:05:58 --> 00:06:00 Okay. So if I understand correctly, we start
00:06:00 --> 00:06:03 with the idea of we need to be more data-driven.
00:06:03 --> 00:06:06 So let's say I'm running a forecast and for the
00:06:06 --> 00:06:09 past three years, I see that sales have been
00:06:09 --> 00:06:11 increasing at 5%. If it's data-driven, when
00:06:11 --> 00:06:14 I'm looking at my next year, I should, you know,
00:06:14 --> 00:06:18 also feel like it's going up, for example. But
00:06:18 --> 00:06:20 let's say the marketing director comes in and
00:06:20 --> 00:06:23 say, oh no, let's lower it, right? So now I'm
00:06:23 --> 00:06:26 almost not as data-driven, but now it gets into
00:06:26 --> 00:06:28 the politics. Is that correct? Yeah, yeah, yeah.
00:06:29 --> 00:06:31 You are absolutely, you are absolutely nailing
00:06:31 --> 00:06:34 it. And that's where the biggest issue comes
00:06:34 --> 00:06:37 in as well, right? I mean, if the market is sending
00:06:37 --> 00:06:40 us different signals, but our sales
00:06:41 --> 00:06:43 or marketing directors, or even
00:06:43 --> 00:06:45 CFOs for that matter, based on their experience,
00:06:45 --> 00:06:48 believe that this is something which is
00:06:48 --> 00:06:50 temporary or, you know, it's fine. We'll be able
00:06:50 --> 00:06:52 to get over with that. Or we have done that in
00:06:52 --> 00:06:54 the past and we know how it's going to work.
00:06:55 --> 00:06:58 So if we ignore what data is telling us, then
00:06:58 --> 00:07:00 of course we are not data-driven. Having said
00:07:00 --> 00:07:02 that, there is always that balance between the
00:07:02 --> 00:07:05 two. Yes, as a human, we must use our experience.
00:07:05 --> 00:07:07 This is what makes us, at the end of the day, uniquely
00:07:07 --> 00:07:09 human. What I'm saying is that if data is telling
00:07:09 --> 00:07:12 us an entirely different story and our gut instinct
00:07:12 --> 00:07:14 is telling us an entirely different story, we
00:07:14 --> 00:07:17 need to really critically evaluate that situation
00:07:17 --> 00:07:19 as opposed to just, you know, saying the data is
00:07:19 --> 00:07:21 not fine, the data is not right. Or, you know, most of
00:07:21 --> 00:07:23 the time you might have heard in a boardroom
00:07:23 --> 00:07:26 or in those planning meetings, oh, I don't
00:07:26 --> 00:07:28 trust the number, I don't know if this number
00:07:28 --> 00:07:31 is correct or not, and so on and so forth. So
00:07:31 --> 00:07:32 again, there are a lot of different elements
00:07:32 --> 00:07:35 associated with building that trust as well.
00:07:36 --> 00:07:38 But inherently not trusting data, or inherently
00:07:38 --> 00:07:43 having that attitude from the outset towards not relying
00:07:43 --> 00:07:48 on data for decision making, is how we drop the
00:07:48 --> 00:07:51 ball on being data-driven. Okay, and then I understand
00:07:51 --> 00:07:54 the part about the processes where you have to
00:07:54 --> 00:07:57 do patchwork. So if you're used to doing that, I can
00:07:57 --> 00:08:00 imagine how it will be hard to automate those
00:08:00 --> 00:08:03 kinds of things with technology, right? You have
00:08:03 --> 00:08:05 to fix how you're doing things first. But then
00:08:05 --> 00:08:08 when you talk about that mindset of experimenting,
00:08:08 --> 00:08:11 like you said, in finance, it's hard to just
00:08:11 --> 00:08:14 experiment, because it's like compliance, it's reporting
00:08:14 --> 00:08:17 to the board. If there is a hallucination or
00:08:17 --> 00:08:20 something like that, like, how do you overcome
00:08:20 --> 00:08:24 that? How do you prepare that mindset to be willing
00:08:24 --> 00:08:28 to experiment and start moving toward AI readiness
00:08:28 --> 00:08:31 in finance? Great question, Wassia. And whenever
00:08:31 --> 00:08:35 I say experimentation, we need to step back a
00:08:35 --> 00:08:39 little bit. OK. Experimentation is one thing
00:08:39 --> 00:08:42 and making mistakes is another thing. Right.
00:08:43 --> 00:08:47 If we are trying to do something, say,
00:08:47 --> 00:08:50 for example, I get ChatGPT or some enterprise
00:08:50 --> 00:08:54 version, and I'm just using it, as an
00:08:54 --> 00:08:57 experiment in my mind, to create
00:08:57 --> 00:08:59 a board report, and then I just send it through
00:08:59 --> 00:09:03 to the board of directors. That's not an experiment.
00:09:03 --> 00:09:08 That's a mistake, right. So we need to very
00:09:08 --> 00:09:11 clearly understand that difference between experimentation
00:09:11 --> 00:09:15 and mistakes, number one. Number two, experimentation
00:09:15 --> 00:09:18 is not something we just do like, I mean, we're
00:09:18 --> 00:09:20 not sitting in a lab and just trying things;
00:09:20 --> 00:09:23 even scientists don't do experiments like that,
00:09:23 --> 00:09:25 right? That they just start combining
00:09:25 --> 00:09:27 two things and think, okay, something comes
00:09:27 --> 00:09:29 up. Combine tomatoes and oranges. Yeah, right,
00:09:29 --> 00:09:34 something comes up. Experimentation, what I always
00:09:34 --> 00:09:37 say is that you need to earn the right to experiment.
00:09:38 --> 00:09:41 And how you do that is by clearly understanding
00:09:41 --> 00:09:44 first of all, what are your clear business needs?
00:09:44 --> 00:09:46 Number one, what are the problems that you are
00:09:46 --> 00:09:49 trying to solve, and then finding different
00:09:49 --> 00:09:52 solutions against those problems, and then, based
00:09:52 --> 00:09:56 on the impact, value, and complexity, identifying those
00:09:56 --> 00:09:59 top two or three candidates, or one candidate, where
00:09:59 --> 00:10:03 you can actually go in and start to build those
00:10:03 --> 00:10:05 experiments and
00:10:05 --> 00:10:08 do experiments around that. It does not mean
00:10:08 --> 00:10:13 that you just go in blindly and start using AI
00:10:13 --> 00:10:16 to create output and then in the name of experimentation,
00:10:16 --> 00:10:19 you are submitting wrong reports for compliance,
00:10:19 --> 00:10:23 or exposing the organization
00:10:23 --> 00:10:26 to risks it should not be exposed to. So that's
00:10:26 --> 00:10:28 not experimentation; again, you know, that's
00:10:28 --> 00:10:31 not how we do it. We need to be
00:10:31 --> 00:10:34 thoughtful about our experiments. And then there
00:10:34 --> 00:10:37 are certain core elements. So you need to do
00:10:37 --> 00:10:39 your tax submissions, for example. You need
00:10:39 --> 00:10:43 to do your SEC submissions. There is no room
00:10:43 --> 00:10:47 for us to experiment on the output. So output
00:10:47 --> 00:10:50 needs to be clear. But how we generate the input,
00:10:50 --> 00:10:53 we can always experiment. We can always try to
00:10:53 --> 00:10:55 find different ways or better ways of doing things.
00:10:55 --> 00:11:00 But that does not give me any right to not validate
00:11:00 --> 00:11:02 or verify the output and just send it through
00:11:02 --> 00:11:11 to the board or to the regulator. So experimentation
00:11:11 --> 00:11:14 needs to be thoughtful, needs to be very clearly
00:11:14 --> 00:11:18 tied into your specific business need. It's not
00:11:18 --> 00:11:21 about, just because we want to use generative
00:11:21 --> 00:11:25 AI somehow, let's do an experiment around
00:11:25 --> 00:11:27 that. That's not how we do experimentation. So
00:11:27 --> 00:11:30 that distinction needs to be
00:11:30 --> 00:11:33 very, very clear. And I agree. I mean, sometimes
00:11:33 --> 00:11:36 we take experimentation in a wrong
00:11:36 --> 00:11:39 connotation and think about it more
00:11:39 --> 00:11:43 like that, when, in fact, what we are doing is not experiments.
00:11:43 --> 00:11:45 What we are doing is making mistakes. So we
00:11:45 --> 00:11:50 need to make sure that we bifurcate and we separate
00:11:50 --> 00:11:53 the two. I hope that explains a bit of that question.
00:11:53 --> 00:11:55 Oh yeah, absolutely. And I'm curious to hear
00:11:55 --> 00:11:58 from your work, what have been some of those
00:11:58 --> 00:12:00 mistakes that you have seen? Because clearly
00:12:00 --> 00:12:02 one of them is just not double checking the work.
00:12:02 --> 00:12:04 Like we do it in accounting, internal controls,
00:12:05 --> 00:12:07 right? Review the work or you sign a journal entry.
00:12:07 --> 00:12:09 But what are other mistakes that you've seen
00:12:09 --> 00:12:12 people make as they're trying to be ready for
00:12:12 --> 00:12:17 AI or actually implementing AI within their organizations?
00:12:18 --> 00:12:22 I think the biggest mistake and sometimes I call
00:12:22 --> 00:12:26 it a bit of a pandemic kind of situation, is becoming
00:12:26 --> 00:12:31 too tool- or AI-focused, to be honest, and
00:12:31 --> 00:12:33 not actually thinking about the business problem.
00:12:34 --> 00:12:36 The very essence of experimentation, what I was
00:12:36 --> 00:12:38 talking about, if we are ignoring that, that's
00:12:38 --> 00:12:41 the biggest mistake in itself. So what happens
00:12:41 --> 00:12:46 in various scenarios that I have seen? We start
00:12:46 --> 00:12:49 the conversation, so I approach or someone approaches
00:12:49 --> 00:12:53 me around AI transformation or this kind of work
00:12:53 --> 00:12:57 that I do. Many times the conversation unfortunately
00:12:57 --> 00:12:59 starts with, how can I automate this task? How
00:12:59 --> 00:13:02 can I use AI to do this thing better? That's
00:13:02 --> 00:13:05 the wrong question to ask, which I would sometimes
00:13:05 --> 00:13:07 say is as close as making a mistake, because what
00:13:07 --> 00:13:11 we do then is we start finding point solutions
00:13:11 --> 00:13:14 which might give us some productivity benefits
00:13:14 --> 00:13:17 or might, you know, automate some of the tasks, but
00:13:17 --> 00:13:20 only within that very specific boundary within which that
00:13:20 --> 00:13:22 AI would be operating. AI's real value comes
00:13:22 --> 00:13:25 in when it integrates across the organization,
00:13:25 --> 00:13:28 when it fits into your broader vision, your broader
00:13:28 --> 00:13:32 organizational operating model for that matter.
00:13:32 --> 00:13:34 That is, I believe, the biggest mistake that
00:13:34 --> 00:13:41 we make. I have seen AI projects
00:13:41 --> 00:13:45 ranging from cash flow forecasting to revenue
00:13:45 --> 00:13:49 analytics to process optimizations not performing
00:13:49 --> 00:13:53 well or not doing as planned, even though there was
00:13:53 --> 00:13:56 nothing wrong with the approach. There was nothing
00:13:56 --> 00:13:59 wrong with the tool or the technology itself
00:13:59 --> 00:14:05 or its appropriateness for that problem,
00:14:06 --> 00:14:09 but it didn't fit into the broader organizational
00:14:09 --> 00:14:10 operating model or the broader organizational
00:14:10 --> 00:14:14 structure. People did not rely on the output.
00:14:14 --> 00:14:16 At the end of the day, people did not adopt the
00:14:16 --> 00:14:18 technology. I mean, having a tool and then using
00:14:18 --> 00:14:21 that tool are two different things, right? I
00:14:21 --> 00:14:25 mean, adoption and deployment. So a lot of times,
00:14:25 --> 00:14:28 these are the kind of, I would say, mistakes
00:14:28 --> 00:14:32 or not the right way of doing experimentation
00:14:32 --> 00:14:34 that I have witnessed quite a few times. Quite
00:14:34 --> 00:14:37 a few times. Sometimes I sound more like a broken
00:14:37 --> 00:14:39 record as well when I keep telling people that,
00:14:39 --> 00:14:43 you know, please let's be a little bit more thoughtful
00:14:43 --> 00:14:45 around our experimentation. Let's not talk about
00:14:45 --> 00:14:48 tool first. Let's not talk about the task even.
00:14:48 --> 00:14:51 Let's look at the holistic system. What are we
00:14:51 --> 00:14:53 doing, right? I mean, how our work is impacting.
00:14:54 --> 00:14:57 At the end of the day, remember, finance
00:14:58 --> 00:15:01 is a function of every other function in the
00:15:01 --> 00:15:06 organization. We cannot say that even the basic
00:15:06 --> 00:15:08 accounting work that we are doing can happen
00:15:08 --> 00:15:12 in isolation without the input from commercials,
00:15:12 --> 00:15:15 from operations, from cost of goods sold, or
00:15:15 --> 00:15:18 sales from the sales team. It cannot happen.
00:15:18 --> 00:15:22 How that linkage, how that end-to-end workflow,
00:15:22 --> 00:15:25 the end-to-end processes work in order for
00:15:25 --> 00:15:29 us to get the maximum utilization of AI or the
00:15:29 --> 00:15:33 maximum leverage out of AI. So that is what I
00:15:33 --> 00:15:37 see a lot in the work that I do. OK. And so I'm
00:15:37 --> 00:15:39 curious too, in your work and your book, you
00:15:39 --> 00:15:43 talk a lot about human-centric, tech-enabled finance.
00:15:43 --> 00:15:47 So what needs to change first in how finance
00:15:47 --> 00:15:51 people think or behave for AI to work in our
00:15:51 --> 00:15:53 favor? Like, what do we need to change? Because
00:15:53 --> 00:15:55 I know you said the first question sometimes
00:15:55 --> 00:15:59 is, how can I use it to automate this? Well,
00:15:59 --> 00:16:01 I'm trying to make my life easy, so I'll likely
00:16:01 --> 00:16:04 ask the same question. So what is a better question
00:16:04 --> 00:16:07 to ask? What actually needs to change for finance
00:16:07 --> 00:16:10 to be more ready for AI? I believe the fundamental
00:16:10 --> 00:16:14 change which is needed in any finance function
00:16:14 --> 00:16:17 or, for that matter, the broader organization,
00:16:17 --> 00:16:19 because I do work with broader organizations as
00:16:19 --> 00:16:23 well, is how do we build a digital mindset
00:16:23 --> 00:16:26 within the team? Now, again, as I said, I sometimes
00:16:26 --> 00:16:29 sound like a broken record when I say that digital
00:16:29 --> 00:16:33 mindset is a very simple thing. It is how well
00:16:33 --> 00:16:36 the finance team or business team is able to
00:16:36 --> 00:16:40 utilize the output of machine to solve a real
00:16:40 --> 00:16:42 world business problem. That is it. As long as
00:16:42 --> 00:16:44 you are able to connect the dots between the
00:16:44 --> 00:16:47 two, you have a digital mindset, and that
00:16:47 --> 00:16:51 enables us to become what we are supposed to
00:16:51 --> 00:16:53 be. We become better business partners and we
00:16:53 --> 00:16:56 actually navigate the ship as opposed to just
00:16:56 --> 00:16:59 being those stewards of the data. When we build
00:16:59 --> 00:17:03 that digital mindset, two things happen. One,
00:17:04 --> 00:17:07 by default, we get process efficiencies, faster
00:17:07 --> 00:17:09 insights, of course, because now you are able
00:17:09 --> 00:17:12 to connect what are the different business
00:17:12 --> 00:17:15 problems I'm facing. And how do I use technology
00:17:15 --> 00:17:18 holistically to solve those business problems?
00:17:19 --> 00:17:23 So as an output, you get process efficiencies,
00:17:24 --> 00:17:26 you get productivity, you get faster insight.
00:17:26 --> 00:17:30 And what that does, secondly, and most important,
00:17:30 --> 00:17:36 I would say, this enables us to do what we are
00:17:36 --> 00:17:39 designed to do as humans, what we are meant to
00:17:39 --> 00:17:42 do: critical thinking, business partnering, and
00:17:42 --> 00:17:45 not that tedious manual work that
00:17:45 --> 00:17:48 we spend so much time on. So as humans,
00:17:48 --> 00:17:51 I think we can do much better than what we are
00:17:51 --> 00:17:53 doing today. As finance leaders, we can do much
00:17:53 --> 00:17:56 better than how we are operating, making
00:17:56 --> 00:17:59 our business plans or how we are doing our back
00:17:59 --> 00:18:02 office accounting for that matter or how we are
00:18:02 --> 00:18:05 paying invoices to our suppliers and so on and
00:18:05 --> 00:18:07 so forth. We can do a much, much better job.
00:18:07 --> 00:18:08 I'm not saying that we're not doing a great job.
00:18:09 --> 00:18:12 I mean, finance is one of the most intelligent
00:18:12 --> 00:18:14 teams in the organization, and they are one of
00:18:14 --> 00:18:17 the most hardworking as well. But how do we make
00:18:17 --> 00:18:21 them a little bit smarter workers using technology?
00:18:21 --> 00:18:25 And this is what helps us then build a human-centric,
00:18:25 --> 00:18:29 tech-enabled finance function. And
00:18:29 --> 00:18:32 so building that digital mindset, when you think
00:18:32 --> 00:18:35 about building that digital mindset, what's a practical
00:18:35 --> 00:18:38 step to get there, right? Like, because I know
00:18:38 --> 00:18:42 it's a bit out there. I see the bigger picture.
00:18:42 --> 00:18:44 But for somebody listening, I want to build a
00:18:44 --> 00:18:47 digital mindset within my team. As a leader,
00:18:47 --> 00:18:50 what do I need to do? And what does my team need
00:18:50 --> 00:18:52 to do? What do I need to expose them to in order
00:18:52 --> 00:18:56 to really elevate that mindset? Great question.
00:18:56 --> 00:18:59 And I actually talk in quite a bit of detail in my book
00:18:59 --> 00:19:02 as well, specifically on this building a digital
00:19:02 --> 00:19:04 mindset. So my argument is that we need to build
00:19:04 --> 00:19:08 two things. First of all, is your digital literacy,
00:19:08 --> 00:19:10 of course, right? Now, that does not mean that
00:19:10 --> 00:19:12 you need to become a data scientist or you need
00:19:12 --> 00:19:15 to become an engineer unless you want to, of
00:19:15 --> 00:19:19 course. But you need to understand the different
00:19:19 --> 00:19:23 technologies that exist today, what we can use
00:19:23 --> 00:19:25 to solve some of the business problems today.
00:19:25 --> 00:19:28 What are the trends? Read a lot of books. I always
00:19:28 --> 00:19:30 recommend that there is no substitute to reading
00:19:30 --> 00:19:34 good books. Understand there are a lot of free
00:19:34 --> 00:19:36 courses available out there where you can just
00:19:36 --> 00:19:39 get an understanding of what AI is all about,
00:19:39 --> 00:19:41 how it can help us solve some of those pressing
00:19:41 --> 00:19:44 issues that we face today. What are the
00:19:45 --> 00:19:48 different methods within data analytics that
00:19:48 --> 00:19:51 we can apply and use, depending upon,
00:19:51 --> 00:19:53 of course, where you are in your hierarchy in
00:19:53 --> 00:19:57 the organization, number one. And number two,
00:19:57 --> 00:19:59 building a digital mindset requires us to
00:19:59 --> 00:20:03 have a growth mindset. So it is almost like a
00:20:03 --> 00:20:05 combination of the two, digital literacy and
00:20:05 --> 00:20:07 growth mindset gives you a digital mindset. Growth
00:20:07 --> 00:20:12 mindset means that as a human, and that applies
00:20:12 --> 00:20:16 of course to both leaders and teams, but sometimes
00:20:16 --> 00:20:19 I say that it applies more to the leaders in
00:20:19 --> 00:20:22 the organization because sometimes that's who
00:20:22 --> 00:20:25 people look up to and follow the lead and follow
00:20:25 --> 00:20:29 in their footsteps. If as leaders we have a growth
00:20:29 --> 00:20:32 mindset, we believe that as humans we can learn
00:20:32 --> 00:20:40 any skill, there is no limit to the human mind on
00:20:40 --> 00:20:43 what can be taught or what can be learned, we
00:20:43 --> 00:20:45 build that growth mindset and that requires us
00:20:45 --> 00:20:48 to do continuous learning, understand different
00:20:48 --> 00:20:52 aspects. And first of all, be very open and acknowledging
00:20:52 --> 00:20:55 of the fact that, yes, I mean, I don't have answers
00:20:55 --> 00:20:58 to everything, but I'm willing to go out, and
00:20:58 --> 00:21:01 I'm willing to find those answers.
00:21:01 --> 00:21:05 When you create that kind of a culture as a leader,
00:21:05 --> 00:21:08 within your team, your team follows through your
00:21:08 --> 00:21:12 footsteps and then they also work towards finding
00:21:12 --> 00:21:13 those answers. And when you are trying to find
00:21:13 --> 00:21:16 those answers, you will, again, I mean, there
00:21:16 --> 00:21:21 is no simple one tool or one course available
00:21:21 --> 00:21:25 out there, or a magic training available out there,
00:21:25 --> 00:21:27 that I can just tell you to do. And it will,
00:21:27 --> 00:21:29 you know, help you build
00:21:29 --> 00:21:31 a digital mindset. It's a continuous learning
00:21:31 --> 00:21:34 journey and the continuous learning as I always
00:21:34 --> 00:21:36 say, is not about how many trainings you are
00:21:36 --> 00:21:40 doing, how many training hours
00:21:40 --> 00:21:42 you are clocking, or how many certificates you
00:21:42 --> 00:21:47 are getting. It
00:21:47 --> 00:21:52 is about the willingness to challenge your very
00:21:52 --> 00:21:55 own assumptions in the light of new knowledge.
00:21:55 --> 00:21:58 That's what growth mindset is all about. That's
00:21:58 --> 00:22:00 what continuous learning is all about. So it's
00:22:00 --> 00:22:03 all a behavioral thing. It has to be ingrained
00:22:03 --> 00:22:06 within our DNA. We need to ingrain this within
00:22:06 --> 00:22:10 our DNA in order for us to be able to build that
00:22:10 --> 00:22:12 digital mindset eventually. And it won't happen
00:22:12 --> 00:22:15 overnight, but we have to take the first step
00:22:15 --> 00:22:18 towards it. Yeah, thank you so much for defining that.
00:22:18 --> 00:22:23 And I'm curious, you know, I understand and I've
00:22:23 --> 00:22:27 seen how inspiring it is for a team to see their
00:22:27 --> 00:22:30 leader learn something new and share it with
00:22:30 --> 00:22:32 them during meetings. And they're like, OK, so
00:22:32 --> 00:22:34 they're coming up with this. I probably need
00:22:34 --> 00:22:37 to catch up. And I can see how it will contaminate,
00:22:37 --> 00:22:40 quote, unquote, the rest of the team to kind
00:22:40 --> 00:22:43 of elevate their game. So as we look
00:22:43 --> 00:22:46 further out, if you had to list the
00:22:46 --> 00:22:49 key skills that an AI-ready finance professional
00:22:49 --> 00:22:52 needs to develop over the next few years, what
00:22:52 --> 00:22:55 will be on that short list? Of course, the first
00:22:55 --> 00:22:58 one will be that digital mindset, right? As I
00:22:58 --> 00:23:01 always say. But I think as future finance
00:23:01 --> 00:23:03 leaders, or as future business leaders for that
00:23:03 --> 00:23:06 matter, there is one thing which is common, which
00:23:06 --> 00:23:08 is going to be the common through line, which
00:23:08 --> 00:23:10 is actually the common through line at this stage
00:23:10 --> 00:23:13 as well. And that is, we are all change managers
00:23:13 --> 00:23:15 of the future. Change is the, I
00:23:15 --> 00:23:18 mean, it's a cliché that change is the only constant.
00:23:18 --> 00:23:23 Yeah. So we need to become excellent at managing
00:23:23 --> 00:23:26 change, which of course requires us to systematically
00:23:26 --> 00:23:28 understand what are the business problems we
00:23:28 --> 00:23:31 are facing, what is in it for people, how we
00:23:31 --> 00:23:34 keep them engaged, how we get them on board with
00:23:34 --> 00:23:37 the change. So first of all, we need to build
00:23:37 --> 00:23:41 our skills around becoming better change managers.
00:23:41 --> 00:23:47 With AI coming in, there is going to be a lot
00:23:47 --> 00:23:51 of productivity that comes out. Of course, there
00:23:51 --> 00:23:54 is a lot that happens as part of the
00:23:54 --> 00:23:57 output of technology, but how that technology works
00:23:57 --> 00:24:01 and what it does is still kind of a black box
00:24:01 --> 00:24:04 and that is where the skill set around responsible
00:24:04 --> 00:24:08 and ethical AI becomes one of the most important
00:24:08 --> 00:24:13 ones in the future. As finance leaders, we are,
00:24:13 --> 00:24:16 yes, we are in a bit of an advantageous
00:24:16 --> 00:24:18 position, I would say, because we have a bit
00:24:18 --> 00:24:23 of a head start with our training and our predisposition
00:24:23 --> 00:24:27 to risk and governance and control. We can actually
00:24:27 --> 00:24:30 lead the charge in this responsible and ethical
00:24:30 --> 00:24:33 AI. As CFOs, we can ask the question that why
00:24:33 --> 00:24:36 we should be doing something. What are the consequences
00:24:36 --> 00:24:39 of doing that? How is data currently flowing
00:24:39 --> 00:24:42 within the organization?
00:24:42 --> 00:24:45 Making those right investment calls and
00:24:45 --> 00:24:49 right ethical decisions. Again, at the end of
00:24:49 --> 00:24:51 the day, responsible and ethical AI is a proportionate
00:24:51 --> 00:24:54 ownership. It's not something that only CFO can
00:24:54 --> 00:24:57 do or CSO can do or CTO can do. But as CFOs,
00:24:57 --> 00:25:00 as finance leaders, due to our position within
00:25:00 --> 00:25:03 the organization and how people look to us
00:25:03 --> 00:25:06 for ethical and governance decisions,
00:25:06 --> 00:25:09 we can influence that a lot. So that is one skill
00:25:09 --> 00:25:11 set which I always recommend
00:25:12 --> 00:25:14 to finance leaders to
00:25:14 --> 00:25:18 definitely get ahead on: get an understanding of
00:25:18 --> 00:25:20 various aspects around what are the different
00:25:20 --> 00:25:23 regulations coming in, what are the different
00:25:23 --> 00:25:26 policies, procedures, and recommendations that
00:25:26 --> 00:25:28 come from global
00:25:28 --> 00:25:32 organizations like UNESCO and the OECD around responsible
00:25:32 --> 00:25:36 and ethical AI. Number two. And then I think
00:25:36 --> 00:25:42 another important skill that we need is, when
00:25:42 --> 00:25:45 AI is determining most of the output, and
00:25:45 --> 00:25:47 now we are talking about agentic AI, which
00:25:47 --> 00:25:50 actually has that agency and is doing stuff for
00:25:50 --> 00:25:56 us, a lot of times there is this curtain
00:25:56 --> 00:26:01 of technology or curtain of AI that sits between
00:26:01 --> 00:26:05 us and the humans on the other side. So understanding
00:26:05 --> 00:26:10 the emotions of humans within the loop, within
00:26:10 --> 00:26:13 that orchestration of agentic AI and all the
00:26:13 --> 00:26:17 technological transformation is the key aspect
00:26:17 --> 00:26:20 and what we can also call emotional intelligence.
00:26:21 --> 00:26:23 That is going to be one of the most, and in fact,
00:26:24 --> 00:26:27 that is one of the most important skill sets to
00:26:27 --> 00:26:29 have and to build within us, in order for us to
00:26:29 --> 00:26:31 be able to, because again, when we talk about
00:26:31 --> 00:26:35 technology, our first reaction is, oh, it is
00:26:35 --> 00:26:39 going to take away jobs. Yes. We forget that
00:26:39 --> 00:26:45 jobs or roles are just a description on a piece
00:26:45 --> 00:26:50 of paper. There are real humans who are working
00:26:50 --> 00:26:53 around those tasks and roles. What are their
00:26:53 --> 00:26:57 emotions? Being aware of what their training
00:26:57 --> 00:27:01 needs are? How are our decisions impacting the
00:27:01 --> 00:27:05 people? They are not just the numbers or they
00:27:05 --> 00:27:09 are not just a cog in a machine. They are real
00:27:09 --> 00:27:13 beings, conscious beings, who have emotions, who
00:27:13 --> 00:27:15 have social structures, who have social needs.
00:27:15 --> 00:27:17 So how do we remain aware of their
00:27:17 --> 00:27:20 needs when the technology comes in and takes away
00:27:20 --> 00:27:24 and automates roles or automates tasks? How do we
00:27:24 --> 00:27:27 separate humans from tasks, and how do we understand
00:27:27 --> 00:27:29 their needs, and how do we actually help them
00:27:29 --> 00:27:32 grow as well, while, of course, making some
00:27:32 --> 00:27:34 tough calls as well? And they will be tough, right?
00:27:34 --> 00:27:37 I don't hide away from the fact that roles are
00:27:37 --> 00:27:41 going to get impacted; they will. But how we manage
00:27:41 --> 00:27:43 those, again, goes back to change management as
00:27:43 --> 00:27:47 well as being emotionally aware is a very critical
00:27:47 --> 00:27:51 skill set that we need. Wow. Thank you so much.
00:27:51 --> 00:27:53 That was great because you started with, you
00:27:53 --> 00:27:55 know, when we talk about the skills that people
00:27:55 --> 00:27:58 will need to develop, digital mindset, definitely
00:27:58 --> 00:28:02 change management, having emotional intelligence
00:28:02 --> 00:28:06 and also the idea of responsible and ethical
00:28:06 --> 00:28:09 AI, which, that's a part I don't think we talk
00:28:09 --> 00:28:13 a lot about. So when we think about responsible
00:28:13 --> 00:28:17 AI, what are some examples of cases maybe where
00:28:17 --> 00:28:21 people used AI in a non-ethical way, but it made
00:28:21 --> 00:28:23 business sense, or it was a good shortcut, or productivity
00:28:23 --> 00:28:26 gains happened? And what should people really
00:28:26 --> 00:28:30 be aware of when it comes to using AI from a governance
00:28:30 --> 00:28:33 and regulation standpoint? You are absolutely
00:28:33 --> 00:28:36 right. First of all, that we don't talk about
00:28:36 --> 00:28:40 it a lot, right? And the reason being, the technology
00:28:40 --> 00:28:43 takes precedence, the business case takes
00:28:43 --> 00:28:46 precedence, and then governance and these ethical
00:28:46 --> 00:28:49 issues, or responsibility, come as a second
00:28:49 --> 00:28:52 thought. We have seen a lot of examples, which
00:28:52 --> 00:28:55 are pretty widely available and published as well, examples
00:28:55 --> 00:28:58 out there where it made business sense but
00:28:58 --> 00:29:01 it was not ethically right to do so.
00:29:01 --> 00:29:05 For example, first of all, before I give an example
00:29:05 --> 00:29:10 of or a case study around that, if we have data
00:29:10 --> 00:29:14 from our customers and we are just using it,
00:29:14 --> 00:29:17 we are just getting that data while we are, of
00:29:17 --> 00:29:19 course, selling to them or transacting with
00:29:19 --> 00:29:22 them. Just because we have that data does not
00:29:22 --> 00:29:27 mean we own it or we can use it to train AI models.
00:29:27 --> 00:29:30 We need explicit consent from them. That is where
00:29:30 --> 00:29:32 these ethical things, ethical dilemmas come
00:29:32 --> 00:29:35 into play. We need to be 100% sure that,
00:29:35 --> 00:29:38 yes, our customers, our consumers are willing
00:29:38 --> 00:29:41 to give us that data. They have already given
00:29:41 --> 00:29:44 us that data, but they are also willing for us
00:29:44 --> 00:29:48 to use that data to train our algorithms. So
00:29:48 --> 00:29:51 for example, I give this example of the Ever app.
00:29:52 --> 00:29:55 There was this app, which was something
00:29:55 --> 00:29:57 like Google Drive, where people would upload
00:29:57 --> 00:30:01 their photos and save their photos. So what they
00:30:01 --> 00:30:04 were doing, they were using those photos or people's
00:30:04 --> 00:30:08 pictures to train some of the algorithms on facial
00:30:08 --> 00:30:10 recognition, and then were marketing and selling
00:30:10 --> 00:30:14 those algorithms to the other companies. So FTC,
00:30:14 --> 00:30:16 Federal Trade Commission, they imposed on them
00:30:16 --> 00:30:21 a penalty called algorithmic disgorgement. So
00:30:21 --> 00:30:23 one thing is ethical, but the other thing is
00:30:23 --> 00:30:25 the regulatory impact as well and getting in
00:30:25 --> 00:30:28 the public eye as well. What that meant for them
00:30:28 --> 00:30:32 was they had to destroy all their algorithms
00:30:32 --> 00:30:35 and had to pay fines and so on and so forth.
00:30:36 --> 00:30:42 So again, we should not
00:30:42 --> 00:30:44 be ethical just because
00:30:44 --> 00:30:48 we have
00:30:48 --> 00:30:51 a regulator sitting on top of us; of
00:30:51 --> 00:30:54 course, we should always be ethical. But there
00:30:54 --> 00:30:57 are, of course, those consequences as well. Secondly,
00:30:57 --> 00:31:02 one of the very important aspects that comes out
00:31:02 --> 00:31:05 of AI, and sometimes we see this a lot:
00:31:05 --> 00:31:09 an output comes from AI and we say, oh,
00:31:10 --> 00:31:12 This is the output from the machine. This is
00:31:12 --> 00:31:16 the output from the AI. So as a human, I'm not
00:31:16 --> 00:31:21 responsible for it. However, this is the basic
00:31:21 --> 00:31:26 premise of responsible and ethical AI that any
00:31:26 --> 00:31:30 output by a machine
00:31:30 --> 00:31:33 is the responsibility of a human at the end
00:31:33 --> 00:31:35 of the day. We are responsible for the output
00:31:35 --> 00:31:39 of the AI; machines are not. So taking that ownership
00:31:39 --> 00:31:43 is critical, and that is where that proportionate
00:31:43 --> 00:31:46 ownership comes into play, where the CFO's role
00:31:46 --> 00:31:48 can play up. CFOs can play a bigger role in enabling
00:31:48 --> 00:31:52 that to happen. So I always
00:31:52 --> 00:31:56 say, what I advise is that
00:31:56 --> 00:31:58 whenever you have these agentic workflows of
00:31:58 --> 00:32:01 today, or AIs of today, follow a no-orphans
00:32:01 --> 00:32:06 policy: every AI must have an owner, and that owner
00:32:06 --> 00:32:08 should be responsible for the output of the AI.
00:32:08 --> 00:32:11 If something is going wrong, that owner, they
00:32:11 --> 00:32:13 should be raising
00:32:13 --> 00:32:15 an alarm. They should be raising their hand and
00:32:15 --> 00:32:17 saying, this is not working or this is how it
00:32:17 --> 00:32:21 is. So, I mean, again, it's a complex topic.
00:32:21 --> 00:32:23 It is, and I would say it's an evolving
00:32:23 --> 00:32:27 field, because what happened, unfortunately,
00:32:27 --> 00:32:30 I would say, for whatever reasons,
00:32:30 --> 00:32:33 commercial reasons or corporate reasons,
00:32:34 --> 00:32:37 is we built technology and we did not think about
00:32:37 --> 00:32:39 responsibility and ethics till a later stage.
00:32:40 --> 00:32:43 So even now we are building technology at a much
00:32:43 --> 00:32:46 faster pace than responsibility or ethical practices
00:32:46 --> 00:32:51 can actually catch up. Wow. Or sometimes I say
00:32:51 --> 00:32:54 that we are, a lot of corporations today are
00:32:54 --> 00:32:57 building technologies
00:32:58 --> 00:33:01 around AI and
00:33:01 --> 00:33:05 are expecting someone else to do those regulations
00:33:05 --> 00:33:08 and ethics. So they, their role becomes more
00:33:08 --> 00:33:11 about innovation and they expect governments
00:33:11 --> 00:33:15 or other regulators to come in and build responsible
00:33:15 --> 00:33:18 and ethical AI frameworks. That is not how it
00:33:18 --> 00:33:20 will work, or that is not sustainable.
00:33:21 --> 00:33:24 Right from the beginning, right when we are writing
00:33:24 --> 00:33:28 the first code for any algorithm, we need to
00:33:28 --> 00:33:31 have responsible and ethical frameworks in place,
00:33:32 --> 00:33:35 which unfortunately is not that prevalent at
00:33:35 --> 00:33:38 this stage. But we are moving in that direction.
00:33:38 --> 00:33:41 There are people and organizations who are actually
00:33:41 --> 00:33:47 trying to get ahead of this innovation spree,
00:33:47 --> 00:33:50 if we can call it that. Wow. But there is so much pressure,
00:33:50 --> 00:33:53 though, and also so much money to be made, right?
00:33:53 --> 00:33:57 So that's usually what gets us in trouble. Yeah,
00:33:57 --> 00:33:59 that's the reason, right? I mean, that's what
00:33:59 --> 00:34:00 I was talking about, that the commercial reasons
00:34:00 --> 00:34:05 have made it much faster to deploy, and then
00:34:05 --> 00:34:08 you just start using AI without understanding
00:34:08 --> 00:34:10 what the responsible, ethical, and social implications
00:34:10 --> 00:34:13 of those are, right? I mean, we don't know today.
00:34:13 --> 00:34:16 We didn't know about
00:34:16 --> 00:34:19 social media, how it was going to impact us. We don't
00:34:19 --> 00:34:22 have a clue what AI is going to do for us,
00:34:22 --> 00:34:25 going to do with our brains, going to do with
00:34:25 --> 00:34:27 how in a society we are operating.
00:34:27 --> 00:34:32 As a human society, the way we
00:34:32 --> 00:34:35 are operating right now, what AI is going to
00:34:35 --> 00:34:37 do with that. We don't know because we haven't
00:34:37 --> 00:34:40 really thought about it. It is coming as an afterthought
00:34:40 --> 00:34:43 once everyone has started to actually
00:34:43 --> 00:34:47 use AI, which is scary as well. It is. It is.
00:34:47 --> 00:34:50 It's very scary. And I'm curious to hear from
00:34:50 --> 00:34:52 your standpoint as you think about the current
00:34:52 --> 00:34:56 organization, the current structure or org chart
00:34:56 --> 00:34:59 of a finance department, what do you see as roles
00:34:59 --> 00:35:02 that will disappear, how some of the current
00:35:02 --> 00:35:06 roles will evolve, like who will be part of it, besides
00:35:06 --> 00:35:09 AI agents, because now I'm getting used to the
00:35:09 --> 00:35:11 idea that there will be AI agents on, you know, on
00:35:11 --> 00:35:15 finance teams. Like, beyond that, how are
00:35:15 --> 00:35:17 those roles that we're used to going to evolve?
00:35:18 --> 00:35:22 Again, first of all,
00:35:22 --> 00:35:27 one thing is for sure,
00:35:27 --> 00:35:29 the org chart is going to change. Right? It
00:35:29 --> 00:35:33 is, and already we see that happening
00:35:33 --> 00:35:39 in many organizations today. Now, many of the
00:35:39 --> 00:35:43 roles might not even exist today. So, for example,
00:35:44 --> 00:35:46 when AI agents, as we talked about, are
00:35:46 --> 00:35:49 carrying out the actual execution, solving problems,
00:35:50 --> 00:35:54 we will need humans in the loop. Right? And these
00:35:54 --> 00:35:58 could be your FP&A AI controllers, your AI agent
00:35:58 --> 00:36:02 coordinators, your AI controllers. Again, it's
00:36:02 --> 00:36:05 very hard to say what they will look like, what
00:36:05 --> 00:36:07 those roles will exactly look like. And I acknowledge
00:36:07 --> 00:36:13 that we don't know 100%. We can think of having
00:36:13 --> 00:36:15 some of those humans in the loop within that
00:36:15 --> 00:36:18 structure, of course, definitely. But again,
00:36:18 --> 00:36:20 we don't know clearly at this stage how
00:36:20 --> 00:36:23 that exact organization is going to look.
00:36:24 --> 00:36:25 Having said that, of course, there will be roles
00:36:25 --> 00:36:30 that will disappear: those roles that
00:36:30 --> 00:36:32 have some problems, even the roles which
00:36:32 --> 00:36:35 require judgment and some problem solving, agents
00:36:35 --> 00:36:39 can actually do that quite diligently and
00:36:39 --> 00:36:42 in a quite efficient way. But we will have a
00:36:42 --> 00:36:45 lot of new roles, as I mentioned, coming up, which
00:36:45 --> 00:36:48 will help us to ensure that AI agents are performing
00:36:48 --> 00:36:52 their tasks correctly, or identifying their objectives,
00:36:52 --> 00:36:55 or defining their objectives, or what problems
00:36:55 --> 00:36:59 they are going to solve. Wow. Now, irrespective
00:36:59 --> 00:37:02 of whatever happens in that scenario, how that
00:37:02 --> 00:37:05 future org structure looks, it's evolving
00:37:05 --> 00:37:07 at this stage, so it's hard to say what
00:37:07 --> 00:37:11 it will exactly look like. But one role or one skill
00:37:11 --> 00:37:16 is going to stay, and that, I think, is going
00:37:16 --> 00:37:20 to become the most relevant for anyone in the
00:37:20 --> 00:37:22 org structure. And that is the critical thinking
00:37:22 --> 00:37:28 piece. With AI, we have too many answers. True.
00:37:28 --> 00:37:31 You ask something, you just ask a simple anything,
00:37:31 --> 00:37:33 you know, do this or do that. It'll give you
00:37:33 --> 00:37:36 a page full of recommendations and stuff. The
00:37:36 --> 00:37:40 most important skill is the ability to ask the
00:37:40 --> 00:37:45 right questions. As humans or as finance leaders,
00:37:45 --> 00:37:51 we will need to be able to identify or define
00:37:51 --> 00:37:54 those objectives. We cannot just
00:37:54 --> 00:37:57 say to AI agents, just do this, process this
00:37:57 --> 00:38:00 invoice. We need to be able to ask the right
00:38:00 --> 00:38:02 questions. We need to be able to define the right
00:38:02 --> 00:38:06 objectives for us to be able to leverage the
00:38:06 --> 00:38:08 maximum out of these machines. Why an output
00:38:08 --> 00:38:10 of a machine is what it is, we should be able to
00:38:10 --> 00:38:12 ask those questions. We cannot just rely on the
00:38:12 --> 00:38:15 output of machine, become victim of automation
00:38:15 --> 00:38:18 bias, because if it's created by machine, it
00:38:18 --> 00:38:21 must be correct. No, we cannot do that. We have
00:38:21 --> 00:38:23 to have that critical mindset, that critical
00:38:23 --> 00:38:25 thinking, that why should we even rely on the
00:38:25 --> 00:38:28 output of a machine? This ability to challenge
00:38:28 --> 00:38:32 the status quo, I think, is the single most important
00:38:32 --> 00:38:35 skill or the role that humans are going to play
00:38:35 --> 00:38:39 in that new org chart. So everyone is going to
00:38:39 --> 00:38:43 become that change manager, that critical thinker,
00:38:43 --> 00:38:47 for them to be able to orchestrate
00:38:47 --> 00:38:49 these different machines or these different agents
00:38:49 --> 00:38:51 together to solve the business problems that
00:38:51 --> 00:38:54 we are trying to solve. Yeah, and I can see how
00:38:54 --> 00:38:56 you have to have that critical thinking to ask
00:38:56 --> 00:38:59 the right question, but also validate, like you
00:38:59 --> 00:39:02 said, whatever the output is that you needed, on
00:39:02 --> 00:39:06 both sides, from end to end, which is quite,
00:39:06 --> 00:39:09 quite fascinating. But I'm curious to also hear, what
00:39:09 --> 00:39:13 are some AI cases that you think CFOs should
00:39:13 --> 00:39:17 try? Or maybe a use case where it sounded great
00:39:17 --> 00:39:19 and it didn't go so well. I'm curious if you
00:39:19 --> 00:39:23 have any success stories or any oops, it didn't
00:39:23 --> 00:39:26 work out as we planned stories. Yeah, so again,
00:39:26 --> 00:39:29 I mean, to answer the first part of your question,
00:39:29 --> 00:39:32 what are some of the use cases that
00:39:32 --> 00:39:35 CFOs should try: now, I always say
00:39:35 --> 00:39:38 that it all depends, first of all, on the business
00:39:38 --> 00:39:41 needs, right. What is our strategy?
00:39:41 --> 00:39:44 What does our broader vision look like? So,
00:39:44 --> 00:39:47 I mean, rather than going with a use-case approach
00:39:47 --> 00:39:49 or point solutions, look at the bigger picture.
00:39:49 --> 00:39:53 But as a principle, I would
00:39:53 --> 00:39:56 recommend going for the well-established and mature
00:39:56 --> 00:39:58 solutions in the beginning. You don't need to,
00:39:58 --> 00:40:02 yes, AI agents or generative AI looks very fancy
00:40:02 --> 00:40:05 on paper, but the technology is still evolving.
00:40:05 --> 00:40:07 It is untested in many business scenarios. We
00:40:07 --> 00:40:11 have seen it not scaling in MIT's report, that
00:40:11 --> 00:40:13 famous report that came up a couple of months
00:40:13 --> 00:40:18 ago, that 95% of AI pilots cannot scale, because
00:40:18 --> 00:40:21 I mean, the technology, there's nothing wrong, nothing
00:40:21 --> 00:40:23 inherently wrong with the technology, but it's
00:40:23 --> 00:40:25 just in its maturity phase. It's going through
00:40:25 --> 00:40:28 that hype cycle right now where where it will
00:40:28 --> 00:40:30 eventually get mature and it will eventually
00:40:30 --> 00:40:34 have more use cases. So go for those well-established
00:40:34 --> 00:40:37 use cases like process mining to understand your
00:40:37 --> 00:40:41 business processes. And secondly, I mean, I always
00:40:41 --> 00:40:44 say that if you are in 2026 and you are still
00:40:44 --> 00:40:50 using Excel or you're still doing manual three-way
00:40:50 --> 00:40:53 matches, you are not in 2026. You
00:40:53 --> 00:40:56 should be using AI
00:40:56 --> 00:40:59 for your accounts receivable and accounts payable.
00:40:59 --> 00:41:01 There are a lot of established
00:41:01 --> 00:41:04 solutions out there, well tested in the market.
00:41:04 --> 00:41:09 You can actually get a lot of, sort
00:41:09 --> 00:41:12 of, endorsements from other industry
00:41:12 --> 00:41:14 partners as well, to understand, okay,
00:41:14 --> 00:41:16 how did it work or how did it not work. So there
00:41:16 --> 00:41:19 is a lot of feedback available
00:41:19 --> 00:41:21 for those tools as well, so you can actually
00:41:21 --> 00:41:24 find a good mature solution, because these problems
00:41:24 --> 00:41:27 have been quite common across
00:41:27 --> 00:41:30 all the verticals; mostly every organization
00:41:30 --> 00:41:32 has the problem. Every organization had a bank recon
00:41:32 --> 00:41:35 problem. So there are quite good mature solutions
00:41:35 --> 00:41:37 available out there, which you can then try on
00:41:37 --> 00:41:40 and use. So go with those in the beginning and
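To make the three-way match point concrete, here is a minimal sketch of what automating that check looks like. It is not from the episode; the field names, document shape, and price tolerance are illustrative assumptions, and real AP tools pull these documents from the ERP and layer exception handling on top.

```python
# Minimal sketch of an automated three-way match (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Doc:
    po_number: str
    quantity: float
    unit_price: float

def three_way_match(po: Doc, receipt: Doc, invoice: Doc,
                    price_tol: float = 0.01) -> list[str]:
    """Compare purchase order, goods receipt, and invoice.
    Returns a list of exceptions; an empty list means the invoice can auto-post."""
    exceptions = []
    if not (po.po_number == receipt.po_number == invoice.po_number):
        exceptions.append("PO number mismatch")
    if invoice.quantity > receipt.quantity:
        exceptions.append("billed quantity exceeds received quantity")
    if abs(invoice.unit_price - po.unit_price) > price_tol * po.unit_price:
        exceptions.append("invoice price outside tolerance vs. PO")
    return exceptions

# A clean match auto-posts; anything else routes to a human reviewer.
po = Doc("PO-1001", 100, 25.00)
receipt = Doc("PO-1001", 100, 25.00)
invoice = Doc("PO-1001", 100, 25.10)
print(three_way_match(po, receipt, invoice))  # [] -> within tolerance, auto-post
```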
00:41:40 --> 00:41:44 So go with those in the beginning. And then, more on the planning side, go for
00:41:44 --> 00:41:48 demand forecasting. Again, quite mature predictive
00:41:48 --> 00:41:51 analytics capabilities are out there. People are
00:41:51 --> 00:41:55 now actually able to predict their revenues.
00:41:55 --> 00:42:01 I wrote something about this a couple of months ago as well. Revenue analytics or
00:42:01 --> 00:42:05 demand forecasting is another very well-established
00:42:05 --> 00:42:10 and good use case for AI, for finance to
00:42:10 --> 00:42:12 start their journey with in the beginning. And of
00:42:12 --> 00:42:17 course, a caveat, as I said: it depends on what your current business
00:42:17 --> 00:42:20 need is. It might not be demand forecasting.
00:42:20 --> 00:42:23 But having said that, thinking in those directions,
00:42:23 --> 00:42:25 the principle remains the same:
00:42:25 --> 00:42:29 go with traditional AI, and think about those
00:42:29 --> 00:42:34 more deterministic forms of artificial intelligence.
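As a flavor of what mature, deterministic predictive analytics can mean here, a minimal sketch, not from the episode: a seasonal-naive revenue forecast that a finance team might use as a baseline before buying anything fancier. The revenue figures and the growth adjustment are illustrative assumptions.

```python
# Illustrative seasonal-naive revenue forecast (all figures are made-up assumptions):
# next month is taken from the same month last year, scaled by recent YoY growth.

revenue = [100, 90, 110, 120, 95, 105, 115, 125, 98, 108, 118, 130,   # last year, $k
           104, 94, 115, 126, 99, 110, 121, 131, 103, 113, 124]       # this year so far

def seasonal_naive_forecast(series: list[float], season: int = 12) -> float:
    """Forecast the next point from the value one season ago,
    adjusted by the average year-over-year ratio observed so far."""
    yoy_ratios = [series[i] / series[i - season] for i in range(season, len(series))]
    trend = sum(yoy_ratios) / len(yoy_ratios)
    return series[-season] * trend

print(f"Next month's revenue forecast: {seasonal_naive_forecast(revenue):.1f}k")
```

The appeal of a baseline like this is exactly its determinism: anyone on the team can trace why the number came out the way it did.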
00:42:34 --> 00:42:36 Okay, coming to your second question,
00:42:36 --> 00:42:39 around a use case which didn't work well:
00:42:39 --> 00:42:42 I think it is more around
00:42:42 --> 00:42:44 cash flow forecasting, and I give this example quite often in my workshops as well.
00:42:44 --> 00:42:50 Somehow, a couple of years ago, before generative
00:42:50 --> 00:42:52 AI and all that hype, we were sold
00:42:52 --> 00:42:58 this concept that AI is going to just
00:42:58 --> 00:43:01 take your data and forecast
00:43:01 --> 00:43:03 your cash flow and all your treasury needs right
00:43:03 --> 00:43:05 away, and you'll just have your
00:43:05 --> 00:43:09 cash flow forecast. That, I believe, is a bit of
00:43:09 --> 00:43:12 an oversimplification of the problem at hand,
00:43:12 --> 00:43:15 and I have seen it not work.
00:43:15 --> 00:43:18 Cash flow, at the end of the day, is not
00:43:18 --> 00:43:21 just a number. It's a function of your receivables,
00:43:21 --> 00:43:24 your payables, your funding, your financing activities,
00:43:24 --> 00:43:27 your operating activities, and so on. It is much
00:43:27 --> 00:43:30 more complicated than just a simple number. So
00:43:30 --> 00:43:32 if we are just using historical cash flows to
00:43:32 --> 00:43:35 predict future outflows, we are just using
00:43:35 --> 00:43:37 a statistical model. We are not really using
00:43:37 --> 00:43:40 predictive analytics in its true sense.
00:43:40 --> 00:43:46 To make the right treasury or cash flow decisions, we
00:43:46 --> 00:43:50 need to, in fact, be predicting our accounts
00:43:50 --> 00:43:53 receivable correctly. We must be predicting our
00:43:53 --> 00:43:57 accounts payable correctly, and our other cash inflows
00:43:57 --> 00:44:00 and outflows correctly, in order to then
00:44:00 --> 00:44:02 eventually get that cash flow forecast right.
00:44:03 --> 00:44:05 In fact, in one particular
00:44:05 --> 00:44:09 case I saw that happen: the organization
00:44:09 --> 00:44:12 just used simple historical cash flows to predict
00:44:12 --> 00:44:15 the future cash flow, which is, again, nothing
00:44:15 --> 00:44:19 better than taking a
00:44:19 --> 00:44:22 last-12-months moving average and then just saying,
00:44:22 --> 00:44:24 you know, this is what it is. There is
00:44:24 --> 00:44:27 no underlying business driver being used
00:44:27 --> 00:44:32 to predict that cash flow.
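To illustrate the contrast being drawn here, a minimal sketch, not from the episode: a trailing-average cash flow forecast next to a driver-based one that builds the number up from receivables and payables. The figures, the driver logic, and the settlement probabilities are all illustrative assumptions.

```python
# Illustrative contrast (all figures and probabilities are made-up assumptions):
# a trailing-average forecast vs. one built from business drivers.

monthly_net_cash_flow = [120, 95, 130, 110, 105, 140,
                         90, 125, 115, 135, 100, 120]  # last 12 months, $k

def naive_forecast(history: list[float]) -> float:
    """Last-12-months moving average: a statistical model, not true predictive analytics."""
    window = history[-12:]
    return sum(window) / len(window)

def driver_based_forecast(open_invoices: list[tuple[float, float]],
                          open_bills: list[tuple[float, float]],
                          other_net_flows: float) -> float:
    """Build next month's cash number up from its drivers:
    expected AR collections minus expected AP payments, plus other flows.
    Each item is (amount, probability of settling within the month)."""
    collections = sum(amount * p for amount, p in open_invoices)
    payments = sum(amount * p for amount, p in open_bills)
    return collections - payments + other_net_flows

print(f"Naive:        {naive_forecast(monthly_net_cash_flow):.1f}k")
print(f"Driver-based: {driver_based_forecast([(80, 0.9), (60, 0.6)],
                                             [(40, 1.0), (30, 0.8)], 25.0):.1f}k")
```

The structural difference is the point: the driver-based number can be interrogated (which invoices are at risk of slipping?), while the moving average can only be accepted or rejected.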
00:44:32 --> 00:44:34 So these are some of the things we need to really think
00:44:34 --> 00:44:37 about a little more deeply, and unpack
00:44:37 --> 00:44:39 what it is really all about. And again, it comes down
00:44:39 --> 00:44:41 to building that digital mindset:
00:44:42 --> 00:44:44 the ability to think critically, to evaluate,
00:44:44 --> 00:44:46 to ask the right questions, and to understand what
00:44:46 --> 00:44:50 it is that we really need to solve, and then go
00:44:50 --> 00:44:53 from there. Wow, thank you so much for sharing.
00:44:54 --> 00:44:56 And I think sometimes we oversimplify things
00:44:56 --> 00:45:00 because it's the sales pitch versus the actual substance.
00:45:02 --> 00:45:05 I was in the midst of implementing a
00:45:05 --> 00:45:08 new system, and they were like, oh, this
00:45:08 --> 00:45:10 new system has AI that will allow you to do X,
00:45:10 --> 00:45:15 Y, Z. Yeah. And the epic
00:45:15 --> 00:45:18 one is where people just get
00:45:18 --> 00:45:21 that ChatGPT Enterprise version, or Copilot
00:45:21 --> 00:45:25 Enterprise version, and they expect it to somehow
00:45:25 --> 00:45:28 transform their data, somehow transform their
00:45:28 --> 00:45:31 operations. Yeah. That's not what ChatGPT is
00:45:31 --> 00:45:33 meant to do. It's just
00:45:33 --> 00:45:37 a chatbot, which works in a very
00:45:37 --> 00:45:40 different way. For your data to work, you need
00:45:40 --> 00:45:42 something else. You need to build data
00:45:42 --> 00:45:47 pipelines. You need to build data architectures, right? And the mindset to
00:45:47 --> 00:45:50 be able to use them. So again, oversimplifying
00:45:50 --> 00:45:55 this, or seeing generative AI or agentic AI
00:45:55 --> 00:45:58 as magic, as something like a silver bullet:
00:45:59 --> 00:46:00 that's pretty common. You are absolutely right.
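To ground the "you need data pipelines" point, a minimal sketch, not from the episode: a tiny extract-clean-load step that puts invoice data somewhere queryable before any copilot sits on top of it. The CSV layout, column names, and SQLite target are illustrative assumptions, not any specific product's API.

```python
# Minimal extract-clean-load sketch (illustrative assumptions only):
# raw invoice CSV exports -> a queryable store an AI layer can sit on.
import csv
import sqlite3

def load_invoices(csv_path: str, db_path: str = "finance.db") -> int:
    """Load invoice rows into SQLite, skipping records that fail basic checks."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS invoices
                    (invoice_id TEXT PRIMARY KEY, vendor TEXT, amount REAL)""")
    loaded = 0
    with open(csv_path, newline="") as f:
        for rec in csv.DictReader(f):
            # Basic data-quality gates: the unglamorous work that makes AI useful.
            if not rec.get("invoice_id") or not rec.get("amount"):
                continue
            conn.execute("INSERT OR REPLACE INTO invoices VALUES (?, ?, ?)",
                         (rec["invoice_id"], rec.get("vendor", "").strip(),
                          float(rec["amount"])))
            loaded += 1
    conn.commit()
    conn.close()
    return loaded
```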
00:46:01 --> 00:46:03 Yeah. So I'm now curious, how would you finish
00:46:03 --> 00:46:07 this sentence? The finance teams that will thrive
00:46:07 --> 00:46:11 with AI are the ones that... Okay, that's a good
00:46:11 --> 00:46:17 one. The finance teams, do you mind saying that
00:46:17 --> 00:46:20 again? The finance teams that will thrive. So
00:46:20 --> 00:46:22 how would you finish this sentence? The finance
00:46:22 --> 00:46:26 teams that will thrive with AI are the ones that...
00:46:26 --> 00:46:30 Are the ones that will... reimagine the fundamentals
00:46:30 --> 00:46:35 of the business operating model. Now, to unpack this a little bit:
00:46:35 --> 00:46:40 that's what my book Reimagine Finance
00:46:40 --> 00:46:43 is all about, right? We need to reimagine
00:46:43 --> 00:46:47 how AI can help us transform our business
00:46:47 --> 00:46:49 operations. How do we fundamentally redesign
00:46:49 --> 00:46:51 our operating models? Because the current operating
00:46:51 --> 00:46:54 model is not working, and is not going to work in
00:46:54 --> 00:47:01 the new digital era, with AI not
00:47:01 --> 00:47:05 just helping us do work, but actually doing the work.
00:47:05 --> 00:47:07 So you need a redesigned or reimagined business
00:47:07 --> 00:47:13 operating model. That was a good one. Thank you.
00:47:13 --> 00:47:15 Thank you so much for sharing. I have one last
00:47:15 --> 00:47:18 question, as we're coming up on time here. Away from
00:47:18 --> 00:47:20 your work in AI, I'm always curious to hear: what's
00:47:20 --> 00:47:23 your favorite thing to do outside of work? Oh,
00:47:23 --> 00:47:26 I love music; you might just have noticed.
00:47:26 --> 00:47:29 I like Pakistani classical
00:47:29 --> 00:47:34 music, so I do a bit of that. It helps me refresh
00:47:34 --> 00:47:37 my thoughts. It's quite refreshing, actually.
00:47:37 --> 00:47:42 Then I also do a bit of oil portraiture.
00:47:43 --> 00:47:51 I have this thing around art, so I do portraits as well. And I'm a bit
00:47:51 --> 00:47:54 of a history buff. In fact, that's why a lot
00:47:54 --> 00:47:59 of the references in my book are based on
00:47:59 --> 00:48:04 human history. I love it as a subject, to
00:48:04 --> 00:48:07 understand how we evolved as humans, where
00:48:07 --> 00:48:14 we are today, and how we got here. That's something
00:48:14 --> 00:48:16 that fascinates me a lot. And I think a lot
00:48:16 --> 00:48:19 of that has implications for where we are
00:48:19 --> 00:48:22 going, and, with AI in the picture, what that is going
00:48:22 --> 00:48:26 to look like. So yeah, this is what I like doing
00:48:26 --> 00:48:30 when I'm not talking about AI. That is so good.
00:48:30 --> 00:48:33 And I love history as well, because it reminds
00:48:33 --> 00:48:36 you that you think you're different, but you're
00:48:36 --> 00:48:40 really not. You're really not. Yeah, we overestimate
00:48:40 --> 00:48:46 our intelligence. I mean, we have already
00:48:46 --> 00:48:51 built things which can actually totally destroy
00:48:51 --> 00:48:54 humanity, right? So I don't think we are
00:48:54 --> 00:48:59 that smart. As Yuval
00:48:59 --> 00:49:04 Noah Harari, my favorite historian, says,
00:49:04 --> 00:49:08 we are the only species who has the ability to
00:49:08 --> 00:49:15 destroy ourselves fully. But it's so true. Well,
00:49:16 --> 00:49:19 thank you. Thank you so much, Tariq, for being
00:49:19 --> 00:49:22 on the show. I really, really enjoyed our conversation.
00:49:23 --> 00:49:25 Same here. Thank you so much, Wassia, for having
00:49:25 --> 00:49:27 me on the show, and I loved our conversation
00:49:27 --> 00:49:30 as well. Thank you. Thank you. Thank you for
00:49:30 --> 00:49:32 tuning in to another episode of the Diary of
00:49:32 --> 00:49:35 a CFO podcast. If you found the conversation
00:49:35 --> 00:49:37 helpful, please don't forget to leave us a review
00:49:37 --> 00:49:40 on Apple, Spotify, YouTube, or wherever you listen,
00:49:40 --> 00:49:42 because it really helps get these kinds
00:49:42 --> 00:49:45 of insights in front of other finance leaders.
00:49:45 --> 00:49:47 If you want to go deeper, don't forget to visit
00:49:47 --> 00:49:50 thediaryofacfo .com for additional resources
00:49:50 --> 00:49:53 that you can use in your career today.