By Blake Wood

A recent study conducted by finance professors at the University of Illinois Springfield explores the accuracy and reliability of financial advice provided by ChatGPT, the popular artificial intelligence-powered chatbot. The study, titled “ChatGPT, Help! I Am in Financial Trouble,” was co-authored by Minh Tam (Tammy) Schlosky and Serkan Karadas, both assistant professors of finance in UIS’ College of Business and Management, along with Sterling Raskie, a senior lecturer in finance at the University of Illinois Urbana-Champaign and adjunct instructor at UIS.

The researchers tested ChatGPT’s financial advice by feeding the AI 21 different scenarios in which individuals might seek financial guidance. These included topics such as investments, debt consolidation, mortgages, gambling, medical expenses and managing unexpected financial windfalls.

Schlosky and Karadas reviewed ChatGPT’s responses from an academic perspective, while Raskie, a certified financial planner, assessed the advice from a wealth management standpoint. The results of the study, published in the Journal of Risk and Financial Management, found that while ChatGPT offered seemingly practical steps for managing finances, its advice often lacked a prioritized plan of action and critical elements present in professional financial counseling.

“ChatGPT was so ready to tell the advice-seekers what was wrong with their situations and to give them an incoherent laundry list of solutions that may or may not apply to their unique situations,” Schlosky said. “It provides suggestions with so much confidence that people who may not be very familiar with finance may inadvertently believe that there are no other solutions to their problems than those recommended by ChatGPT. I still think that ChatGPT is a good first stop for getting helpful information, but we need to keep its limitations in mind and hope that future iterations of this amazing tool become humbler.”

The study highlighted the chatbot’s shortcomings in areas such as saving for college, where ChatGPT failed to recommend a 529 savings plan, and retirement calculations, where it made basic mathematical errors. Additionally, the AI failed to account for important legal and ethical considerations, such as the risks associated with a family member managing investments on behalf of an elderly relative.

“One issue that I want to highlight is that ChatGPT always finds something wrong when someone faces an unfortunate situation,” Karadas said. “For example, we have a case where a financially responsible person gets diagnosed with cancer and burns through all his savings for his treatment. ChatGPT still said that he should have saved more. I do not think a human advisor would say that.”

Despite the chatbot’s limitations, the researchers noted that ChatGPT could serve as a useful starting point for individuals organizing their thoughts and exploring basic financial options. However, they agreed that AI is unlikely to replace human financial advisors anytime soon.

“If somebody wants a true professional legal fiduciary opinion, I still think they're going to use a human versus ChatGPT,” Raskie said.

The full study is available in the Journal of Risk and Financial Management.
