OpenAI unveiled an upgraded version of ChatGPT on Tuesday that is capable of passing key academic exams with ease but still has lingering issues with producing “biased” content, according to the artificial intelligence company.
GPT-4 was tested on multiple exams intended for humans, including a simulated bar exam, which it passed with a score ranking “around the top 10% of test takers”; GPT-3.5, by comparison, ranked around the bottom 10%, according to OpenAI. Like earlier models, GPT-4 still produces “biased” content and viewpoints, but it scores 19 percentage points higher than GPT-3.5 on factuality.
GPT-4 excelled at other exams it was tested on, scoring 1,300 out of 1,600 on the SAT and five out of five on numerous Advanced Placement high school exams, a major improvement over GPT-3.5. The ChatGPT chatbot was initially released in 2022 and was a disruptive advancement in artificial intelligence, according to FOX Business.
Greg Brockman, president and co-founder of OpenAI, demonstrated some of GPT-4’s capabilities and limitations in a developer livestream on Tuesday. In the finale, he showed how GPT-4 could do taxes, pasting in 16 pages of tax code and asking the chatbot questions such as what the standard deduction and tax liability would be for a fictional couple.
GPT-4 also scored significantly higher on safety measures, responding to requests for prohibited content 82% less often than its predecessor, according to OpenAI.
GPT-4 is currently only available to OpenAI users paying for ChatGPT Plus and as an API for developers, according to TechCrunch.
Republished with permission from Daily Caller News Foundation