What ChatGPT Means for Finance

According to Gartner analyst Mark D. McDonald, some finance professionals are actively learning about generative AI and starting to experiment with it—and they’re the ones who will succeed in the long run.

Within the past several months, ChatGPT has become a cultural phenomenon. People in all kinds of roles, across nearly every industry, are working to figure out how this type of artificial intelligence (AI) can, should, and will affect their futures, both personal and professional.

Treasury & Risk sat down with Mark D. McDonald, a senior director analyst at Gartner who specializes in the application of AI technologies to finance processes, to get his take on what tools like ChatGPT might mean for treasury and finance.

Treasury & Risk:  How do you think ChatGPT is likely to affect corporate treasury and finance organizations?

Mark D. McDonald:  To start, I want to be clear that ChatGPT is an example of an AI use case. It’s not the holy grail or the final resting place of where AI technology is going. It’s an iterative step along the way. Further down the line in this process, we might have a technology that can understand and answer specific business questions, and maybe even make suggestions for that specific business. But ChatGPT does not do that.

ChatGPT is trained on data that’s available on the Internet; it doesn’t know proprietary information about a specific business. What it does do is show where we are going, and that is extraordinarily useful. In the future, generative AI tools might incorporate data specific to a given business, which could offer a wide range of use cases—everything from answering general questions about corporate performance to proposing journal entries or validating whether certain journal entries look correct.

T&R:  Is answering treasury professionals’ questions about their bank statements, or providing natural-language communication with treasury software systems, another use case that you foresee for generative AI?

MDM:  Absolutely. I don’t just foresee it; that is available now. A variety of vendors provide a new generation of chatbots driven by the same technologies that are behind ChatGPT. They provide an interface with internal and external customers, and they are available to buy today.

T&R:  Should treasury and finance professionals be concerned about the accuracy of information coming out of these systems? It seems there’s a real accuracy risk with ChatGPT because it’s not clear what quality of source material it’s drawing on.

MDM:  That’s absolutely right. People want to race off to get all the benefits as quickly as possible, but we have to be cautious. There’s no guarantee that the answer a generative AI system provides is going to be correct. You get whatever information the system deems to be the most statistically relevant answer to the question you’re asking. Which means that sometimes it’s going to be wrong. Depending on what you ask and how much you’re relying on the AI tool, the impact of a wrong answer could range from trivial to devastating.

T&R:  What’s the path forward for treasury and finance professionals?

MDM:  What I recommend to clients is to develop an understanding of what these systems are capable of by starting with small use cases. The first step is to get a feel for which circumstances they work well in, and which types of questions they are less likely to answer correctly.

T&R:  So, go through a rigorous testing process to see when it’s right and when it’s not?

MDM:  Yes. And the fundamental challenge isn’t just for treasury. The general challenge is that these systems appear human-like. You might think you’re talking to a knowledgeable human being, somebody who understands the nuances in human judgment and human interaction. It’s not a human, but we don’t yet have a feel for the difference between a human and a human-like algorithm. We haven’t yet learned as a society or as people—or as finance managers—how to tell the difference.

T&R:  Are there any specific controls you would recommend having in place before utilizing generative AI to come up with answers to finance- or treasury-related questions?

MDM:  Well, regardless of how you’re using an AI tool, you need to be able to easily validate any answers the system gives you. That doesn’t mean re-creating every answer, because that would defeat the purpose of asking the software in the first place. But humans do need to validate what comes out of these systems.

It’s important to remember that, in the realm of finance, if the system’s not right and you rely on that information for decisions, you are liable. Not the system. Pointing to an AI tool as the provider of faulty information is not going to sufficiently explain why something went wrong with corporate financial reporting, for example. The people responsible for a financial report must validate any responses coming out of an AI system, to make sure they understand and are comfortable relying on that information.

T&R:  What actions should treasury and finance teams take to validate AI-generated information?

MDM:  Part of it is identifying what types of questions the system can reliably answer. Like I said, treasury and finance staff should start small. When we want to ask a new category of questions, we should experiment, then check to make sure that the algorithm has been trained with enough information to provide answers that we’re confident we can rely on.

T&R:  Could tools like this be helpful in generating financial reports or board presentations?

MDM:  I do not recommend using the current version of ChatGPT or any publicly available solution to do that. When you give financial information to one of these algorithms, the algorithm then owns that information, and conceivably someone outside your company could ask a similar question and learn something that you haven’t released yet. Treasury and finance professionals definitely need to be careful about what information they share with this type of system.

That said, these tools could be helpful in a couple of ways. One would be gathering facts that might be relevant for external-facing or board communications. Then, after you’ve validated the facts it gathered and deemed them relevant, maybe ChatGPT could generate an intelligent-sounding script.

But under no circumstance—for the next several years, at least—would I blindly rely on an algorithm to generate my board reports for me, or to create a public statement. You can use it for help in generating those communications, but with the same caveat: Make sure that you’re validating the information you’re using and recognize that you’re responsible for that information.

T&R:  Do you expect these tools to replace treasury and finance jobs?

MDM:  Generative AI tools will affect people’s careers in the same way as all the other revolutionary technologies that have been introduced in the past century. Certain roles are going to get shifted. There will be displacements and replacements along the way, and it is hard to say right now exactly where that will happen.

When generative AI reaches the point of maturity, these tools won’t necessarily replace people. However, they will likely result in people who know how to use these technologies replacing people who can’t use the technologies.

T&R:  So it’s something finance professionals should be prepared to learn to harness so they don’t get left behind.

MDM:  Not just be prepared, they should be actively working on this right now. They should be learning how these systems work. Recognizing the difference between humans and human-like algorithms takes a long time. It can take years for individuals and organizations to learn how to use AI safely and reliably.

In addition to getting started experimenting with some small use cases, treasury and finance professionals should be learning the basics of data science. We don’t all need to be software engineers, but the treasury and finance functions manage a lot of data, and the techniques that are behind machine learning and AI and ChatGPT are very relevant to how we handle data. Learning how ChatGPT and similar tools work prepares us for the next wave of AI, no matter what that looks like.


T&R:  Do you think tools like ChatGPT will increase the risk of payment fraud by making scams like business email compromise look more believable?

MDM:  Absolutely. Most certainly and most definitely. Even scarier than that is using these technologies to tailor a scam to an individual, finding an avenue to exploit that particular person’s weaknesses. These technologies are also very accessible, which makes it easier for more people to commit payment fraud.

T&R:  How can a corporate treasurer protect against this increased threat?

MDM:  One of the avenues we currently take is teaching people how to avoid scams. We’re going to need to continue to do that. We can also use technology to help us identify things that are fraudulent. Of course, that will need to be ramped up, and that cycle is probably only going to accelerate.

The bottom line is that we all need to be prepared for the increasing sophistication and prevalence of generative AI. The next 10 years will probably bring the biggest change in treasury and finance functions that we have seen in our lifetimes. Anybody who is currently waiting to see what happens will likely be behind. Other people are already actively learning about generative AI and starting to experiment, and those are the people who will succeed in the long run.