Building an AI-Ready Finance Team Without Losing the Human Side, with Tariq Munir
The Diary of a CFO · February 12, 2026 · 00:49:53

In this episode of The Diary of a CFO, we examine what it actually takes to build an AI-ready finance function and why most organizations are getting it wrong. Tariq Munir, LinkedIn Top Voice and author of Reimagine Finance, brings 20 years of experience advising Fortune 500 leaders on digital transformation, and makes the case that the biggest barriers to AI adoption in finance are not technical. They are behavioral.

How Do You Build an AI-Ready Finance Team Without Losing the Human Side?

Everyone is racing to adopt AI right now, and most are starting in the wrong place. The instinct is to find tasks to automate or tools to deploy. The reality is that AI adoption fails not because of the technology itself, but because the teams using it were never set up to think in the way AI demands.

In this episode of The Diary of a CFO, Wassia Kamon speaks with Tariq Munir about what genuine AI readiness looks like for finance teams, how to build the digital mindset that makes AI work, and where the real risks lie when governance and ethical thinking are treated as afterthoughts. Tariq is a LinkedIn Top Voice, author of Reimagine Finance, and has spent two decades in the rooms where these decisions are being made at the Fortune 500 level.

This episode is for CFOs and finance leaders trying to navigate the pressure to adopt AI without losing strategic clarity, for team builders who want to elevate their function without replacing the humans in it, and for anyone who wants a grounded, practical framework for what AI-ready actually means in day-to-day finance work. Listeners will leave with a clearer picture of what to fix before deploying any technology, which use cases are worth starting with, and what skills will matter most in the finance function of the future.

Why This Episode Matters

  • If your organization is racing toward AI adoption without a clear strategy, this episode reframes the conversation around the mindset and process foundations that determine whether any technology actually delivers value.

  • If you are a finance leader worried about what AI means for your team and your role, this episode offers a clear-eyed view of what will change, what will not, and what skills position you well for what is coming.

  • If you want to use AI responsibly without exposing your organization to regulatory or reputational risk, this episode addresses the governance and ethical dimensions that most AI conversations skip entirely.

Key Takeaways

  • AI readiness is a mindset issue before it is a technology issue. A data-driven culture, streamlined workflows, and a willingness to experiment in a structured way are prerequisites, not byproducts, of successful AI adoption.

  • Experimentation and mistakes are not the same thing. Thoughtful experimentation is tied to specific business problems with defined outcomes. Sending an AI-generated board report without review is not an experiment — it is a risk.

  • The finance professionals who will thrive are those who develop critical thinking, change management, and emotional intelligence alongside digital literacy. These human skills become more valuable as AI handles more of the execution.

  • Responsible and ethical AI is not a compliance checkbox. Finance leaders are uniquely positioned to own this conversation inside their organizations, given their existing role in governance, risk, and investment decisions.

Questions This Episode Answers

What does an AI-ready finance team actually look like in day-to-day practice?

What are the most common mistakes finance teams make when adopting AI?

Which AI use cases should CFOs prioritize first and which ones are overhyped?

What skills will finance professionals need to stay relevant as AI reshapes the function?

How should finance leaders approach AI governance and responsible use?

How Do You Know If Your Finance Team Is Actually Ready for AI?

The single most reliable indicator of AI readiness in a finance team has nothing to do with software. It is whether data actually drives decisions. If the monthly forecast gets overridden by the most senior person in the room regardless of what the numbers say, if processes run on manual workarounds because the underlying systems were never properly designed, if the culture treats every new approach as a risk to be avoided rather than a hypothesis to be tested, no AI implementation will fix that. It will amplify it.

This means the work of getting ready for AI is largely cultural and operational. It requires building genuine trust in data as a decision-making input, not just a reporting output. It requires simplifying and standardizing workflows before layering automation on top of them. And it requires creating enough psychological safety that teams can run structured experiments, defined by clear business problems, measured against specific outcomes, without every unfamiliar result being treated as a failure.

The distinction between experimentation and mistakes matters enormously here. Experimentation is earned through clarity of purpose: identifying the business problem, evaluating solutions, and designing a test with guardrails. Submitting an AI-generated compliance report without review is not experimentation. It is an abdication of the human accountability that makes finance functions trustworthy in the first place.

Which AI Use Cases Should CFOs Prioritize and Which Ones Should They Approach With Caution?

The most common trap in AI adoption is leading with the tool rather than the problem. Point solutions that automate individual tasks may deliver narrow productivity gains, but AI's real value emerges when it integrates across the organization, connecting data flows, surfacing insights that cut across functions, and enabling better decisions at the system level rather than the task level.

For CFOs beginning their AI journey, the strongest starting point is proven, mature technology rather than the most visible or exciting new capability. Accounts payable and receivable automation, bank reconciliation, and process mining are well-tested across industries, have established feedback loops, and carry far lower implementation risk than emerging generative AI applications. Demand forecasting and revenue analytics represent strong next steps, provided the underlying data architecture is solid enough to support meaningful prediction.

The cautionary note is on cash flow forecasting as an AI use case, not because it is the wrong goal, but because it is frequently oversimplified. Cash flow is a function of receivables, payables, financing activities, and operating drivers. A model that predicts future cash flows from historical cash flow patterns alone is little more than a moving average dressed up in AI language. Getting cash flow forecasting right with AI requires solving the component predictions first, then assembling them into a coherent picture. The temptation to skip that complexity is exactly where AI projects fail to deliver.
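To make that distinction concrete, here is a minimal illustrative sketch, not from the episode and with entirely hypothetical figures, contrasting the two approaches: predicting next month's cash flow from its own history (effectively a moving average) versus assembling it from separately forecast components.

```python
# Illustrative sketch (hypothetical figures): forecasting cash flow from its
# own history versus building it up from component forecasts.

def moving_average_forecast(history, window=3):
    """Naive approach: predict the next cash flow from recent cash flows."""
    return sum(history[-window:]) / window

def component_forecast(receivables, payables, financing, operating):
    """Build-up approach: forecast each driver, then assemble net cash flow."""
    return receivables - payables + financing + operating

# Monthly net cash flows (hypothetical, in $k)
history = [120, 80, 150, 90, 140, 100]

naive = moving_average_forecast(history)  # (90 + 140 + 100) / 3 = 110.0

# Component forecasts for next month (hypothetical, in $k)
assembled = component_forecast(
    receivables=300,   # expected customer collections
    payables=180,      # expected supplier payments
    financing=-20,     # scheduled debt service
    operating=15,      # other operating inflows
)                      # 300 - 180 - 20 + 15 = 115

print(naive, assembled)
```

The point of the sketch is that the naive model only ever looks at past totals, while the build-up model forces each driver to be predicted and owned separately before the totals are assembled.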

What Skills Will Finance Professionals Need to Stay Relevant as AI Reshapes the Function?

As AI agents take on more of the execution work in finance, the premium on distinctly human capabilities rises. Critical thinking, the ability to ask the right question, challenge an AI-generated output, and evaluate whether the answer actually addresses the underlying business problem, becomes the baseline skill for anyone working alongside intelligent systems. Automation bias, the tendency to accept machine-generated outputs without scrutiny simply because a machine produced them, is one of the most underappreciated risks in AI-assisted finance work.

Change management will be just as foundational. Finance leaders are already change managers in everything but name, and as AI reshapes roles and workflows, the ability to bring people along, communicate the human implications of technological change, and maintain team cohesion through disruption will separate effective CFOs from technically proficient ones. This is inseparable from emotional intelligence, remaining aware of the real impact on real people as automation reshapes what their work looks like day to day.

The third capability is responsible and ethical AI literacy. Finance leaders are well positioned to own this conversation inside their organizations. The CFO's existing mandate around governance, risk, and investment decisions creates natural authority to ask hard questions about data ownership, algorithmic accountability, and the human consequences of automation. Every AI system should have a named owner responsible for its outputs. That principle of proportionate ownership needs a champion inside the organization, and in most cases, the CFO is the right person to hold that line.

Resources Mentioned

  • Guest: Tariq Munir, LinkedIn Top Voice, Author of Reimagine Finance

  • Book: Reimagine Finance by Tariq Munir

  • CFO Readiness Assessment: diaryofacfo.com — 25 questions, personalized readiness score

  • Topics: AI Readiness, Digital Mindset, Data-Driven Decision Making, Responsible AI, Change Management, Agentic AI

Subscribe to the Finance Executive Track newsletter for actionable insights to become the obvious choice for top finance roles and thrive once you get there:

https://the-finance-executive-track.kit.com/signup

Download the free guide on AI prompts every finance leader needs:
https://www.wassiakamon.com/prompts

Learn more about Wassia Kamon and The Diary of a CFO at thediaryofacfo.com.

 

About The Diary of a CFO

The Diary of a CFO is a podcast about modern finance leadership, hosted by award-winning CFO Wassia Kamon. The show is for current CFOs, emerging finance leaders, FP&A professionals, and founders who work closely with finance teams.

Each episode explores how CFOs and senior finance executives build high‑performing finance and FP&A teams, partner with CEOs, boards, and capital providers (banks, PE/VC, and impact lenders), and navigate growth, regulation, and transformation without burning out.

TRANSCRIPT

[00:00:00] When you say AI-ready finance team, what does that actually look like in reality, day to day? Look at just one day in practice, which can actually give you a good litmus test if you are ready to have AI in your business or not. And that one litmus test is understanding, or evaluating, how are you making your decisions today?

How do you prepare that mindset to be willing to experiment and start moving toward AI readiness in finance? So we need to very clearly understand that difference between experimentation and mistakes. Everyone is racing to adopt AI right now, and most people are getting it wrong. They're using technology to replace humans when they should be using it to make humans more powerful.

My guest today has spent 20 years in the rooms where those decisions are getting made, advising Fortune 500 leaders on how to get this right. Tariq Munir is a LinkedIn Top Voice and the author of Reimagine [00:01:00] Finance. So if you're trying to figure out how to stay ahead in your career, how to make sure your team is elevated to be ready for AI, or how to just make sure that you're using AI without losing yourself, this episode is for you. Let's dive in.

Hello and welcome to The Diary of a CFO podcast. I'm your host, Wassia Kamon. I'm a CFO with a background in accounting and FP&A, and I started this show to talk about what leading in finance really looks like and what it takes to become a CFO. Each week we explore how today's top finance leaders build high-performing teams, partner with CEOs and boards, and lead through growth and transformation without burning out in the process.

Today I am super delighted to have with me Tariq Munir. Welcome to the show, Tariq. Thank you, Wassia, for having me. Super excited to be here. Oh, same here, because I really want us to be able to talk about how to build an AI-ready finance team in 12 months without losing that human side, which is really what I liked about the angle you took in your book.

So I'm curious to understand from you, when you say [00:02:00] AI-ready finance team, what does that actually look like in reality, day to day? A great question, Wassia. And well, you know, theoretically we can do a whole lot of readiness assessments. We can do all those pre-assessment or pre-filled-out checklists, do all those, uh, sort of questionnaires.

Mm-hmm. However, I always say, look at just one day in practice, which can actually give you a good litmus test if you are ready to have AI in your business or not. Okay. And that one litmus test is understanding, or evaluating, how are you making your decisions today?

Okay. If data is just another chart on a PowerPoint deck, and that is it, and does not fundamentally drive how you are making decisions, I'm sorry, you are not ready. Oh, no. Most of us think that once we get AI, we will [00:03:00] become data-driven, because then we'll have more insights, faster insights, and so on and so forth.

It is in fact the other way around. Mm. To really leverage AI, we need to have a data-driven mindset, because at the end of the day, data-driven is not a technology issue, right? It's a mindset thing, right? We need to be able to use the output of the machine, rely on the output of the machine, and to do that, we need to have the courage to take a decision based on data.

Okay, even while our gut or instinct tells us otherwise, because a lot of times we go with our gut, we go with our instinct. And I don't say, you know, don't go with your gut or instinct; that's what makes us uniquely human. But relying on data when data is telling us an entirely different story is what makes us ready to go for any technology, for that matter.

Oh, I would like to see how it works in, you know, in real life, right? So let's say you walk into a finance department today, what are the signals that will tell you [00:04:00] they're not ready for AI? Right. And that's even if, like you said, the technology is there. So you walk into a finance team, and what tells you that they're not data-driven enough already to be ready for AI?

So as I mentioned, Wassia, if most of the decisions are still based on gut feel or, you know, the person, instead of the merit or what the data is telling us, if they are based on the hierarchy of decision making or the hierarchy of people in the organization, then we are not data-driven, right?

Oh yeah. We are more people-driven. Sorry, not people-driven, I would say we are more, um, guided by instinct and gut, and our experience, as opposed to what data is telling us. Apart from that, I would like to point out two more things, right? Mm-hmm. One is, of course, being data-driven.

Secondly, it's about workflows being too complex. So if I walk into a finance [00:05:00] function and their processes are broken, their processes are not supporting automation, there are too many manual workarounds. And a manual workaround is not a bad thing per se, but creating a manual workaround to overcome some system issue is, mm-hmm, what I'm talking about. Okay. So if those workflows are too complex, all we are going to do using AI is amplify that complexity. So I always, um, you know, start off with streamlining those workflows.

Okay. And lastly, a very important element I always, uh, recommend and advise my clients around is that if people are hesitant to experiment, everything needs to be safe, everything needs to be risk-averse. Being finance, we are risk-averse. That's how we are trained, right? Mm-hmm. But AI, on the other hand, requires some experimentation. It's not the kind of [00:06:00] implementation which will happen like an ERP or any other cloud or SaaS solution, where there will be clear go/no-go decisions. You will always have an output which is not very accurate.

Then you will retrain your models to get more accurate output, mm-hmm, and so on and so forth. So you need to have that experimentation mindset. Okay. Now, these things which I'm talking about right now do not really need technology at all, as you would have noticed, right? These are mostly mindset-driven, mostly behavior-driven things, which would actually drive and get you ready to leverage AI. And I also don't say that you do all of these things and then go for AI. Of course, that's also not practically possible, but starting to realize and starting to acknowledge that we need to change is the first step.

Mm-hmm. And then working on this while you are, of course, working on your AI and digital roadmaps as well is what sets us up for success. Okay, so if I understand correctly, we start with the idea that we need to be more data-driven. So let's say I'm running a [00:07:00] forecast, and for the past three years I see that sales have been increasing at 5%.

If it's data-driven, when I'm looking at my next year, I should, you know, also feel like it's going up, for example. But let's say the marketing director comes in and says, oh no, let's lower it, right? So now I'm almost not as data-driven, but now it gets into the politics. Is that correct? Yeah, yeah, yeah. You're absolutely nailing it.

And that's where the biggest issue comes in as well, right? If the market is sending us different signals, mm-hmm, but our sales or marketing directors, or even CFOs for that matter, based on their experience, believe that this is something which is temporary, or, you know, it's fine, we'll be able to get over it, or we have done that in the past and we know how it's going to work, so if we ignore what data is telling us, mm-hmm, then of course we are not data-driven. Having said that, there is always that balance between the two. Yes, as humans, we [00:08:00] must use our experience. This is what makes us, at the end of the day, uniquely human.

Mm-hmm. But what I'm saying is that if data is telling us one story and our gut and instinct is telling us an entirely different story, we need to really critically evaluate that situation, as opposed to just, you know, saying, oh no, the data is fine, or the data is not right. Or, you know, most of the time you might have heard in a boardroom or in those planning meetings, oh, I don't trust the number.

I don't know if this number is correct or not, and so on and so forth. So again, there are a lot of different elements associated with building that trust as well. Mm-hmm. But inherently not trusting data, or inherently having that attitude of not relying on data, mm-hmm, for decision making, is how we, uh, you know, drop the ball on being data-driven.

Okay. And then I understand the part about the processes where you have to do patchwork. So if you used to do it that way, I can imagine how it will be hard to automate those kinds of things with technology, right? You have to fix [00:09:00] how you're doing things first. But then when you talk about that mindset of experimenting, like you said, in finance it's hard to just experiment, 'cause it's like compliance, it's reporting to the board. If there is a hallucination or something like that, like, how do you overcome that? How do you prepare that mindset to be willing to experiment and start moving toward AI readiness in finance?

Great question, Wassia. And whenever I say experimentation, we need to just step back a little bit. Okay. Experimentation is one thing, and making mistakes is another thing, right? Say, for example, I get ChatGPT or some enterprise version, and in my mind I'm just using it as an experiment. I'm just using it to create a board report, and then I just send it through to, uh, the board of directors.

That's not an experiment. [00:10:00] That's a mistake. Mm. Right. So we need to very clearly understand that difference between experimentation and mistakes, number one. Number two, experimentation is not something we just do. Like, I mean, we are not sitting in a lab just trying things. Even scientists don't do experiments like that, right? They don't just start combining a few things, combine tomatoes and oranges, yeah, right, and see what comes up. That's not experimentation. What I always say is that you need to earn the right to experiment. And how you do that is by clearly understanding, first of all, what are your clear business needs.

Mm-hmm. Number one, what are the problems that you are trying to solve? Then finding different solutions against those problems, and then, based on the impact, value, and complexity, identifying those top two or three candidates, or one candidate, where you can actually go and start to build those [00:11:00] experiments and do experiments around that.

It does not mean that you just go in blindly and start using AI to create output, and then, in the name of experimentation, you are submitting wrong reports for compliance or, uh, exposing the organization to risks it should not be exposed to. Mm-hmm. So that's not experimentation.

Again, you know, that's not how we do it. We need to be thoughtful about our experiments. Mm-hmm. And then there are certain core elements, right? So you need to do your tax submissions, for example; you need to do your SEC submissions, right? Mm-hmm. There is no room for us to experiment on the output, right?

So the output needs to be clear, but how we generate the input, we can always experiment, we can always try to find different ways or better ways of doing things. Mm-hmm. But that does not give me any right to not validate or [00:12:00] verify the output and just send it through to, um, yes, the board or everybody else.

Yeah. The regulator. Right. So experimentation needs to be thoughtful, mm-hmm, needs to be very clearly tied into your specific business need. It's not that, you know, just because we want to use generative AI somehow, let's do an experiment around that. That's not how we do experimentation. So that distinction needs to be very, very clear.

And I agree. I mean, sometimes we take experimentation in a wrong connotation, when in fact what we are doing is not experiments, mm-hmm, what we are doing is making mistakes. Yes. So we need to make sure that we bifurcate and separate the two.

I hope that explains a bit of, uh, a bit of that question. Oh yeah, definitely. And I'm curious to hear, from, you know, your work, what have been some of those mistakes that you have seen? 'Cause clearly one of [00:13:00] them is just not double-checking the work, like we do it in accounting with internal controls, right?

Mm-hmm. You review the work, or you sign a journal entry. Um, but what are other mistakes that you've seen people make as they're trying to be ready for AI, or actually implementing AI, um, within their organizations? I think the biggest mistake, and I sometimes call it a bit of a pandemic kind of situation, is becoming too, uh, tool- or AI-focused, to be honest, and not actually thinking about the business problem.

The very essence of experimentation is what I was talking about; if we are ignoring that, that's the biggest mistake in itself. So what happens in various scenarios I have seen is this: we start the conversation, so I approach or someone approaches me around AI transformation, or this kind of work that I do.

Many times the conversation unfortunately starts with, how can I automate this task? How can [00:14:00] I use AI to do this thing better? That's the wrong question to ask, I would sometimes say it's almost as bad as making a mistake, because what we do then is start finding point solutions, which might give us some productivity benefits or might, you know, automate some of the tasks, but only within that very specific boundary in which that AI would be operating.

AI's real value comes in when it integrates across the organization, when it fits into your broader vision, your broader organizational, uh, operating model, for that matter. Mm-hmm. That is, I believe, the biggest mistake that we make. I have seen AI projects, ranging from cash flow forecasting to revenue analytics to process optimizations, not performing well or not going as planned, and there was nothing wrong with the approach; there was nothing wrong with the tool or the technology [00:15:00] itself, or, uh, its appropriateness for that problem.

Mm-hmm. But it didn't fit into the broader organizational operating model or the broader organizational structure. People did not rely on the output. At the end of the day, people did not adopt that technology. I mean, having a tool and then using that tool are two different things, right? I mean, adoption versus deployment.

A lot of the time, these are the kinds of, I would say, mistakes, or not the right way of doing experimentation, that I have witnessed quite a few times, right? Sometimes I sound like a broken record when I keep telling people, you know, please, let's be a little bit more thoughtful around our experimentation.

Let's not talk about the tool first. Let's not talk about the task, even. Look at the holistic system. What are we doing, right? I mean, how is our work having an impact? At the end of the day, remember, finance is [00:16:00] a function of every other function in the organization. True. We cannot say that even the basic accounting work we are doing can happen in isolation, without input from commercial, from operations, for cost of goods sold, from the sales team.

It cannot happen. Mm-hmm. So it is about how that linkage, how those end-to-end workflows and end-to-end processes, work in order for us to get the maximum utilization of AI, or the maximum leverage out of AI. So that is what I see a lot in the work that I do. Okay. And so I'm curious, um, in your work, in your book, you talk a lot about human-centric, tech-enabled, um, finance. So what needs to change first in how finance people think or behave for AI to work in our favor? Like, what do we need to change? 'Cause I know you said the first question sometimes is, how can I use it to [00:17:00] automate this? Well, I'm trying to make my life easy, so I'd likely ask the same question.

So what is a better question to ask? What actually needs to change for finance to be more ready for AI? I believe the fundamental change which is needed in any finance function, or for that matter the broader organization, mm-hmm, because they do work with the broader organization as well, mm-hmm, is how do we build a digital mindset within the team.

Now again, as I said, I sometimes sound like a broken record when I say that a digital mindset is a very simple thing. It is how well the finance team or business team is able to utilize the output of the machine to solve a real-world business problem. Okay? That is it. As long as you are able to connect the dots between the two, you have a digital mindset, and that enables us to become what we are supposed to be: we become better business partners, and we actually navigate the ship as opposed to just being those stewards of the data. [00:18:00] Mm-hmm.

Now, when we build that digital mindset, two things happen. One, by default, we get process efficiencies and faster insights, of course, because now you are able to connect: what are the different business problems I'm facing, mm-hmm, and how do I use technology holistically to solve those business problems? So as an output, you get process efficiencies, you get productivity, you get faster insights. Mm-hmm. And secondly, and most important, I would say, this enables us to do what we are designed to do as humans, what we are meant to do.

Critical thinking, business partnering, and not that tedious manual work that we spend so much time on. So as humans, I think we can do much better than what we are doing today. As finance leaders, we can do much better than how we are [00:19:00] making our business plans, or how we are doing our back-office accounting for that matter, or how we are paying invoices to our suppliers, and so on and so forth. We can do a much, much better job.

I'm not saying that we are not doing a great job. I mean, finance is one of the most intelligent teams in the organization, and they are one of the most hardworking as well. But, you know, how do we make them smarter workers? Yeah, using technology, and this is what helps us then build a human-centric, tech-enabled finance function.

And so, when you think about building that digital mindset, what is a practical step to get there, right? Like, 'cause I know it's, um, a bit out there. I see the bigger picture, but for somebody listening who wants to build a digital mindset within their team: as a leader, what do I need to do, and what does my team need to do? What do I need to expose them to in order to really elevate that mindset?

Great question, [00:20:00] and I actually talk in quite a bit of detail in my book as well, specifically on this, building a digital mindset. So, mm-hmm, my argument is that we need to build two things. First of all is your digital literacy, of course. Now, that does not mean that you need to become a data scientist or an AI engineer, unless you want to, of course.

But you need to understand the different, uh, technologies that exist today: what can we use to solve some of the business problems today? What are the trends? Read a lot of books; I always recommend that there is no substitute for reading good books. Uh, and there are a lot of free courses available out there where you can just get an understanding of what AI is all about, and how it can help us solve some of those pressing issues that we face today.

What are the different methods within data analytics that we can apply and use? Depending, of course, upon where you are in the hierarchy in the organization. That's number one. [00:21:00] And number two, building a digital mindset requires us to have a growth mindset. So it is almost like a combination of the two.

Digital literacy and a growth mindset give you a digital mindset. A growth mindset, and that applies of course to the wider teams, but, mm-hmm, I sometimes say it applies more to the leaders in the organization, because that's who people look up to and follow, mm-hmm, you know, follow in their footsteps.

If as leaders we have a growth mindset, where we believe that as humans we can learn any skill, that there is no limit to the human mind on what can be taught or what can be learned, mm-hmm, we build that growth mindset. And that requires us to do continuous learning, understand different aspects, and, first of all, be very open and acknowledging of the fact that, [00:22:00] yes, I mean, I don't have answers to everything.

But I'm willing to go out and find those answers. When you create that kind of culture as a leader, your team follows in your footsteps and also works towards finding those answers. And when you are trying to find them, you will realize there is no single tool, no single course, no magic

training out there that I can just tell you to do and it'll give you a digital mindset. It's a continuous learning journey. And continuous learning, as I always say, is not about how many trainings you are doing, how many training hours you are clocking, or

how many certificates you are getting. It's about the willingness to challenge your very own [00:23:00] assumptions in the light of new knowledge. Nice. That's what a growth mindset is all about. That's what continuous learning is all about. So it's all behavioral.

It has to be ingrained in our DNA in order for us to eventually build that digital mindset. And it won't happen overnight, but we have to take the first step towards it. Thank you so much for defining that. I'm curious, because I understand, and I've seen, how inspiring it is for a team to see their leader learn something new and share it with them during meetings.

And they're like, okay, if they're coming up with this, I probably need to catch up. I can see how it will "contaminate," quote unquote, the rest of the team to elevate their game. So as we look further out, if you had to list the key skills that an AI-ready finance professional needs to develop over the next few years, what would be on [00:24:00] that short list?

Of course, the first one will be that digital mindset, as I always say. But I think for future finance leaders, or future business leaders for that matter, there is one thing which is going to be the common through line, and which is actually the common through line at this stage as well.

And that is: we are all change managers of the future. It's a cliche that change is the only constant, but it means we need to become excellent at managing change. That, of course, requires us to systematically understand the business problems we are facing, what is in it for people, how we keep them engaged, and how we get them on board with the change.

So first of all, we need to build our skills around becoming better change managers. That's number one. Number two, with AI coming in, there is a lot of [00:25:00] productivity that comes out as the output of the technology.

But how that technology works and what it does is still a black box, and that is where the skillset around responsible and ethical AI becomes one of the most important ones for the future. As finance leaders, we are in a bit of an advantageous position, I would say, because we have a head start with our training and our predisposition to risk, governance, and control. As CFOs, we can actually lead the charge on responsible and ethical AI.

We can ask the questions: why should we be doing something? What are the consequences of doing it? How is data currently flowing through the organization? We can make the right investment calls and the right ethical decisions. [00:26:00] And at the end of the day, responsible and ethical AI is a proportionate ownership.

It's not something that only the CFO can do, or the CSO, or the CTO. But as CFOs, as finance leaders, due to our position within the organization and how people look to us for ethical and governance decisions, we can influence it a lot. So that is one skillset which I always recommend finance leaders definitely get.

Get ahead on understanding the various aspects: what different regulations are coming in, and what policies, procedures, and recommendations are coming from global organizations like UNESCO and the OECD around responsible and ethical AI.

That's number two. And then I think another important skill we need [00:27:00] is for when AI is determining most of the output. Now we are talking about agentic AI, which actually has agency and does things for us. A lot of times there is this curtain of technology, this curtain of AI, that sits between us and the humans on the other side.

So understanding the emotions of the humans in the loop, within that orchestration of agentic AI and all the technological transformation, is a key aspect. It's what we can also call emotional intelligence. That is going to be, and in fact already is, one of the most important skillsets to build. Because when we talk about technology, our first reaction is: oh, it is going to take away jobs.

Yes. But we forget that [00:28:00] jobs or roles are just descriptions on a piece of paper. There are real humans working on those tasks and roles. What are their emotions? What are their training needs? How are our decisions impacting the people? They're not just numbers, not just cogs in a machine.

They are real, conscious beings who have emotions, social structures, and social needs. So how do we remain aware of their needs when the technology comes in and automates roles or tasks? How do we separate humans from tasks, how do we understand their needs, and how do we actually help them grow as well?

While, of course, making some tough calls as well. And there will be tough calls, right? I don't hide away from the fact that roles are going to get [00:29:00] impacted. They will. But how we manage that change goes back to change management, as well as being emotionally aware.

That is a very critical skillset that we need. Wow, thank you so much. That was great, because when we talk about the skills people will need to develop, you covered the digital mindset, definitely; change management; emotional intelligence; and also the idea of responsible and ethical AI, which is a part I don't think we talk a lot about.

So when we think about responsible AI, what are some examples of cases where people used AI in a way that was not ethical, but it made business sense, so it was a good productivity shortcut? And what should people really be aware of when it comes to using AI from a governance and regulations standpoint?

You are absolutely right, first of all, that we don't talk about it a lot. [00:30:00] And the reason is that the technology takes precedence. The business case takes precedence, and then governance and these ethical issues come as a second thought. We have seen a lot of examples, publicly

available and published, where it made business sense but it was not ethically right to do so. For example, before I give a case study around that: if we have data from our customers, and we are just using it.

We are getting that data while we are selling to them or transacting with them. Just because we have that data does not mean we own it or can use it to train AI models. We need explicit consent from them. That is where these ethical dilemmas come into play. We need to be a hundred percent sure that our [00:31:00] customers and consumers are willing to give us that data, or, if they have already given it to us, that they are also willing to have us use it.

Willing, that is, for us to use that data to train our algorithms. Yeah. So for example, I give this example of the Ever app, by Everalbum. It was an app, something like Google Drive, where people would upload and save their photos. What the company was doing was using those photos, people's pictures, to train facial recognition algorithms.

And then they were marketing and selling those algorithms to other companies. So the FTC, the Federal Trade Commission, imposed on them a penalty called algorithmic disgorgement. So one thing is the ethics, but the other thing is the regulatory impact, and being in the public eye as well.

What that meant for them was that they had to destroy all their algorithms, pay fines, and so on and so forth. So [00:32:00] we should not be ethical only because we have a regulator sitting on top of us. Of course we should always be ethical, but there are those consequences as well.

Secondly, one very important aspect that comes out of AI: sometimes an output comes from AI and we say, oh, this is the output from the machine, the output from the AI, so as a human I'm not responsible for it. However, the basic premise of responsible and ethical AI turns on this question: is any output by a machine the responsibility of the AI, or the responsibility of a human?

At the end of the day, we are responsible for the output of [00:33:00] the AI; the machines are not. So taking that ownership is critical, and that is where that proportionate ownership comes into play. CFOs can play a big role in enabling that to happen. So what I advise is that whenever you have these agentic workflows of today, or the AI of today:

Follow a "no orphans" policy. Every AI must have an owner, and that owner should be responsible for the output of the AI. If something is going wrong, that owner should be raising an alarm, raising their hand and saying, this is not working, or this is how it should be. Again, it's a complex topic.

It is an evolving field, because what happened, unfortunately, for whatever commercial or corporate reasons, is that we built technology [00:34:00] and did not think about responsibility and ethics until a later stage. So even now, we are building technology at a much faster pace than responsibility or ethical practices can catch up.

Wow. Or sometimes, I say, a lot of corporations today are building technologies around AI and expecting someone else to do the regulations and ethics. So their role becomes more about innovation,

and they expect governments or other regulators to come in and build the responsible and ethical AI frameworks. That is not how it'll work, and it is not sustainable. Right from the beginning, when we are writing the first code for any algorithm, we need to have responsible and ethical frameworks in place.

Which unfortunately is not [00:35:00] that prevalent at this stage, but we are moving in that direction. There are people and organizations who are actually trying to get ahead of this innovation. Wow. But there is so much pressure, and also so much money to be made, right?

Right. So that's usually what gets us in trouble. Yeah, that's the reason, right? That's what I was talking about: commercial pressures have made it much faster to deploy, and you just start using AI without understanding the responsible, ethical, and social implications.

Right? We didn't know, with the kids using social media, how it was going to impact us. We don't have a clue what AI is going to do to us. Very true. What it's going to do to our brains, to how we operate as a human society.

The way we are operating right now, what AI is going [00:36:00] to do with that, we don't know, because we haven't really thought about it. It is coming as an afterthought, once everyone has started to actually use AI. Yeah, it's a bit scary as well. It's very scary. And I'm curious to hear from your standpoint too: as you think about the current structure, the current org chart, of a finance department, what do you see as roles that will

disappear? And how will some of the current roles evolve? Who will be part of it, besides AI agents? Because now I'm getting used to the idea that there will be AI agents on finance teams. Beyond that, how are the roles we are used to going to evolve? First of all,

The org chart is one thing that is for sure going to change, and it is already; we see that happening in [00:37:00] many organizations today. Many of the roles of the future might not even exist today. For example, when AI agents, as we talked about, are carrying out the actual execution and solving problems, we will need humans in the loop.

And these could be your FP&A AI controllers, your AI agent coordinators. Again, it's very hard to say exactly what those roles will look like, and I acknowledge that we don't know a hundred percent. We can think of having some of those humans in the loop within that org structure.

Of course, definitely. But again, we don't know clearly at this stage exactly what those organizations are going to look like. Having said that, of course there will be roles that disappear. Even roles which [00:38:00] involve some problem solving, AI agents can actually do quite diligently and in quite an efficient way.

We will have a lot of new roles coming up, as I mentioned, which will help us ensure that AI agents are performing their tasks correctly, and which will identify and define their objectives and the problems they're going to solve. Wow. Now, irrespective of whatever happens in that scenario, however that future org structure ends up looking.

It's evolving at this stage, so it's hard to say exactly what it'll look like. But one role, one skill, is going to stay, and I think it is going to become the most relevant for anyone in the org structure: the critical thinking piece. With AI, we have too many answers. True. You just ask something simple,

anything, you know, do this or do [00:39:00] that, and it'll give you a page full of recommendations. The most important skill is the ability to ask the right questions. As humans, as finance leaders, we will need to be able to identify and define those objectives. We cannot just say to AI agents: do this process, process this invoice.

We need to be able to ask the right questions and define the right objectives in order to leverage the maximum out of these machines. An output of a machine is what it is; we should be able to question it. We cannot just rely on the output of a machine and become victims of automation bias, thinking that if it's created by a machine, it must be correct.

No, we cannot do that. We have to have that critical mindset, that critical thinking: why should we even rely on the output of a machine? This ability to challenge [00:40:00] the status quo, I think, is the single most important skill, and the role that humans are going to play in that new org chart.

So everyone is going to become that change manager, that critical thinker, in order to orchestrate these different machines, these different agents, together to solve the business problems we are trying to solve. Yeah. And I can see how you have to have that critical thinking to ask the right question, but also to validate the output, like you said. You need it on both sides, end to end, which is quite fascinating.

But I'm also curious to hear: what are some AI use cases that you think CFOs should try? Or maybe a use case where it sounded great and didn't go so well? I'm curious about any success stories, or any "oops, it didn't work out as we planned" stories. Yeah. So, to answer the first part of [00:41:00] the question:

What are some of the use cases that CFOs should try? I always say it all depends, first of all, on the business needs. What is our strategy? What does our broader vision look like? So rather than taking a use-case approach, or point solutions, look at the bigger picture. But

as a principle, I would recommend going for the well-established and mature solutions in the beginning. Yes, AI agents and generative AI look very fancy on paper, but that technology is still evolving. It is untested in many business scenarios, and we have seen it not scaling: in MIT's report, that famous report that came out a couple of months ago, 95% of AI pilots could not scale.

There is nothing inherently wrong with the technology; it's just in its maturity phase. It's going through that hype cycle right now, [00:42:00] and it'll eventually mature and have more use cases. So go for those well-established use cases, like process mining to understand your business processes.

And secondly, I always say that if you are in 2026 and you are still using Excel, or still doing manual three-way matches, you are not in 2026. You should be using AI for your accounts receivable and accounts payable. There are a lot of established solutions out there.
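As an aside for readers: the three-way match mentioned here is a mechanical control where an invoice is cleared for payment only if it agrees with the purchase order and the goods receipt. A minimal sketch of that check, with hypothetical field names and tolerances:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    """One of the three documents being matched (illustrative fields)."""
    po_number: str
    quantity: int
    unit_price: float

def three_way_match(po: Doc, receipt: Doc, invoice: Doc, price_tol: float = 0.01) -> bool:
    """Clear an invoice for payment only if PO, goods receipt, and invoice agree."""
    same_po = po.po_number == receipt.po_number == invoice.po_number
    # You cannot be billed for more than was received, or received more than ordered.
    qty_ok = invoice.quantity <= receipt.quantity <= po.quantity
    price_ok = abs(invoice.unit_price - po.unit_price) <= price_tol
    return same_po and qty_ok and price_ok

po = Doc("PO-1001", quantity=100, unit_price=4.50)
receipt = Doc("PO-1001", quantity=100, unit_price=4.50)
invoice = Doc("PO-1001", quantity=100, unit_price=4.50)
print(three_way_match(po, receipt, invoice))  # a clean match clears for payment
```

AP automation tools apply variations of this rule at scale, adding document extraction, tolerance policies, and exception queues for the mismatches that need a human.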

They are well tested in the market. You can actually get a lot of endorsements from other industry partners as well, to understand how it worked or how it did not work; there is a lot of feedback available for those tools. So you can find a good, mature solution, because these problems are widespread.

They're common [00:43:00] across all the verticals: almost every organization has an AR/AP problem, and every organization has had a bank reconciliation problem. So there are quite good, mature solutions available out there which you can try and use. Go with those in the beginning. And then, more on the planning side, go for demand forecasting; again, quite a mature predictive analytics capability is out there.

People are now actually able to predict their revenues; I worked on something like that myself a couple of months ago. Revenue analytics, or demand forecasting, is another very well-established and good use case for AI, for finance teams to start their journey with.

And of course, the caveat again is that it depends on your current business need; it might not be demand forecasting. Having said that, thinking in those directions, the principle remains the same: go with traditional [00:44:00] AI, and think about those more deterministic forms of artificial intelligence.

Okay. Coming to your second question, about a use case which didn't work well: I give this example quite often in my workshops, and it's around cash flow forecasting. Somehow, a couple of years ago, before generative AI and all that hype, we were sold this concept that predictive analytics was going to just forecast your cash flow and all your treasury needs right away; you'd just have your cash flow forecast.

That, I believe, is a bit of an oversimplification of the problem at hand, and I have seen it not working. Cash flow, at the end of the day, is not just a number. It's a function of your receivables, payables, your funding and financing activities, operating activities, and so on and so forth. [00:45:00]

It is much more complicated than just a simple number. So if we are just using historical cash flows to predict future outflows, we are just using a statistical model; we are not really using predictive analytics in its true sense. To make the right treasury or cash flow decisions, we need to in fact be predicting our accounts receivable correctly.

We must be predicting our accounts payable correctly, and our other cash inflows and outflows correctly, in order to eventually get that cash flow forecast right. In fact, in one particular case I saw this happen: the organization just used simple historical cash flows to predict the future cash flow, which is nothing better than

a moving average, a last-12-months moving average, and then saying: this is what it is. There is no underlying business driver [00:46:00] being used to predict that cash flow. So these are some of the things we need to really think about in a deeper way.
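To make the contrast concrete for readers, here is a small sketch, with hypothetical numbers throughout, of the difference between a trailing-average "forecast" and a driver-based one built from its components:

```python
# Naive approach: the "forecast" is just a trailing 12-month average
# of historical net cash flows, blind to any business driver.
historical_net_cash_flows = [120, 95, 130, 110, 105, 140, 90, 125, 115, 100, 135, 128]

def moving_average_forecast(flows, window=12):
    """Average of the most recent `window` observations."""
    recent = flows[-window:]
    return sum(recent) / len(recent)

# Driver-based approach: forecast each component separately
# (receivables collected, payables paid, other flows), then combine.
def driver_based_forecast(ar_collections, ap_payments, other_inflows, other_outflows):
    return ar_collections - ap_payments + other_inflows - other_outflows

naive = moving_average_forecast(historical_net_cash_flows)
# The component figures below would come from their own models
# (collection timing, payment terms, seasonality); constants stand in here.
driven = driver_based_forecast(ar_collections=450, ap_payments=310,
                               other_inflows=20, other_outflows=45)
print(round(naive, 2))  # a single smoothed number, no drivers behind it
print(driven)           # built up from receivables, payables, other flows
```

The point of the guest's critique is that the first function, however it is branded, can never react to a change in payment terms or collections, because those drivers are not in the model at all.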

Unpack what it's all about. And again, it comes down to building that digital mindset: the ability to critically think and evaluate, to ask the right questions, and to understand what it is we really need to solve, and then go from there. Wow, thank you so much for sharing. And I think sometimes we oversimplify things because it sells, versus the actual substance.

Because I was in the midst of implementing a new system, and they were like, oh, this new system has AI, it will allow you to do X, Y, Z. Yeah. And the epic one is where people just get the ChatGPT enterprise version, or the Copilot enterprise version, and they expect that to somehow transform [00:47:00] their data, somehow transform their operations.

Yeah. That's not what ChatGPT is meant to do. It's a chatbot, which works in a very different way. For your data to work, you need to have something else: you need to build data pipelines, the right data architectures, and the right mindset to be able to use it.

So again, oversimplifying this, or seeing generative AI, or AI in general, as magic, as something like a silver bullet: that's pretty common. You are absolutely right. Yeah. So I'm now curious: how would you finish this sentence? The finance teams that will thrive with AI are the ones that... Okay, that's a good one.

The finance teams... do you mind saying that again? Yes: how would you finish this sentence, the finance teams that will thrive with AI are the ones that... [00:48:00] ...are the ones that will reimagine the fundamentals of the business operating model. Again, to unpack this a little bit, that's what even my book, Reimagine Finance, is all about.

We need to reimagine how AI can help us transform our business operations. How do we fundamentally redesign our operating models? Because the current operating model is not going to work in the new digital DNA, with AI not just helping us do the work but actually doing the work.

So you need a redesigned, or reimagined, business operating model. Okay, thank you. That was a good one. Thank you so much for sharing. I have a last question; we're coming up on time here. Away from your work and AI, I'm always curious to hear: what's your favorite thing to do outside of work?

Oh, I love music. You might [00:49:00] have noticed. I like Pakistani classical music, so I do a bit of that. It helps me refresh my thoughts; it's quite refreshing, actually. Okay. Then I also do a bit of oil portraits.

I have this thing for art. Oh, wow. So I do portraits as well. And I'm a bit of a history buff; in fact, that's why a lot of the references in my book are based on human history. I love that as a subject: understanding how we evolved as humans, how we got to where we are today.

How did we get here? Yeah. So that's something which fascinates me a lot, and I think a lot of it [00:50:00] has implications for where we are going, and, with AI in the picture, how that is going to look. So yeah, this is what I like doing when I'm not talking about AI. That is so good.

That is so good. And I love history as well, because it reminds you that you think you're different, but you're really not. Yeah. We are overestimating our intelligence. I mean, we have already built things which can actually totally destroy humanity, right?

So I don't think we are that smart. As Yuval Noah Harari, one of my favorite historians, says: we are the only species who has the ability to destroy ourselves fully. It's so true. Yeah, it's so true.

Well, thank you. [00:51:00] Thank you so much, Tariq, for being on the show. I really, really enjoyed our conversation. Same here. Thank you so much, Wassia, for having me on the show; I loved our conversation as well. Thank you. Thank you for tuning into another episode of The Diary of a CFO podcast. If you found the conversation helpful, please don't forget to leave us a review on Apple, Spotify, YouTube, or wherever you listen,

because it really helps get these kinds of insights in front of other finance leaders. If you want to go deeper, visit thediaryofacfo.com for additional resources you can use in your career today. The site features a free CFO readiness assessment where aspiring CFOs can answer 25 questions to get a personalized readiness score and see exactly which skills to develop.

There's also a newsletter with actionable insights for finance leaders looking to advance their careers and thrive in CFO roles.